
Last Updated: April 25, 2024

Claims for Patent: 8,116,527




Summary for Patent: 8,116,527
Title: Using video-based imagery for automated detection, tracking, and counting of moving objects, in particular those objects having image characteristics similar to background
Abstract: A system and method to automatically detect, track and count individual moving objects in a high density group without regard to background content, embodiments performing better than a trained human observer. Select embodiments employ thermal videography to detect and track even those moving objects having thermal signatures that are similar to a complex stationary background pattern. The method allows tracking an object that need not be identified every frame of the video, that may change polarity in the imagery with respect to background, e.g., switching from relatively light to dark or relatively hot to cold and vice versa, or both. The methodology further provides a permanent record of an "episode" of objects in motion, permitting reprocessing with different parameters any number of times. Post-processing of the recorded tracks allows easy enumeration of the number of objects tracked within the FOV of the imager.
Inventor(s): Sabol; Bruce M. (Vicksburg, MS), Melton; R. Eddie (Vicksburg, MS)
Assignee: The United States of America as represented by the Secretary of the Army (Washington, DC)
Application Number: 12/575,073
Patent Claims:

1. A method for tracking bats, comprising: a) employing at least one specially configured computer having computer readable storage media at least some of which contains specialized software implementing at least one algorithm; b) capturing video images of said bats by employing at least one digital imaging device, said at least one digital imaging device in operable communication with said at least one specially configured computer; c) employing at least some said specialized software implementing a first said at least one algorithm to create at least one synthetic adaptive temporal background within the field of view (FOV) of said at least one digital imaging device, said synthetic adaptive temporal background at least removing clutter, wherein synthetic target-free background images are generated as said at least one synthetic temporal background at a time interval and number of sequential source images specified by a user, and wherein the value for each pixel in said synthetic target-free background is determined by taking the mode of the histogram of values of said pixels for each location within a said video image; d) collecting on at least said computer readable storage media at least said video images of said bats, said video images made available as pixels arranged in video frames; wherein at least said video images of said bats are sent to said computer for processing using at least some of said specialized software; e) for each said bat imaged by said at least one digital imaging device, differencing said pixels in said video frames sequentially by subtracting a current said synthetic temporal background to yield differenced said pixels as a differenced image; f) taking the absolute value of each resultant said differenced image, wherein said absolute value is taken to eliminate the effects of polarity; g) thresholding, to a user-specified threshold, those of said differenced pixels at the tail end of the distribution of said differenced pixels, wherein the location and value of said differenced pixels are saved to a detected pixel report for subsequent processing, and wherein said imaged bats are each identifiable as an individual said pixel cluster of said pixels in said video frame of thresholded differenced pixels, and wherein a track of a said imaged bat is established if two said individual pixel clusters representing an individual said bat exhibit similar size in two successive video frames of said differenced thresholded pixels; h) applying a standard "region growing" technique to find discrete contiguous said pixel clusters to be associated with each candidate said bat, wherein applying said "region growing" algorithm establishes a cluster of contiguous single-polarity pixels that identifies an individual said bat, and wherein the center location of said pixel cluster, number of said pixels, and boundary dimensions of said pixel cluster are saved for subsequent processing, and wherein said center location [X, Y] of each said pixel cluster is determined by taking an average of all locations of said pixels within said pixel cluster weighted by a respective absolute difference value; i) establishing said two pixel clusters of a similar size in at least two successive video frames as location pairs; and j) updating and labeling each said location pair as a motion vector in each subsequent said differenced thresholded video frame, wherein said updating is used to predict a next position of each said imaged bat; and k) iterating steps d) through j) said video frame-by-said video frame for each said FOV and respective synthetic temporal background to generate an output of individual said tracks of each said imaged bat represented in said video frames, wherein said method enables simultaneous tracking of multiple bats, and wherein said bats may have a thermal signature in a range that is approximately equal to the range of the thermal signature of said stationary background in the FOV of said digital imaging device, and wherein behavior of said multiple bats may be chaotic.
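
The detection front end in steps c) through g) of claim 1 is easy to prototype. The following is a minimal sketch, not the patented implementation: it builds the synthetic target-free background as the per-pixel mode of a stack of source frames, then differences each frame against it, takes absolute values to remove polarity, and thresholds into a detected pixel report. It assumes 8-bit grayscale frames held as NumPy arrays; every name below is illustrative.

    import numpy as np

    def synthetic_background(frames: np.ndarray) -> np.ndarray:
        """Per-pixel mode of an (N, H, W) uint8 stack (claim 1, step c)."""
        n, h, w = frames.shape
        flat = frames.reshape(n, -1)
        modes = np.empty(h * w, dtype=np.uint8)
        for i in range(h * w):
            # Mode of the histogram of this pixel's N values.
            modes[i] = np.bincount(flat[:, i], minlength=256).argmax()
        return modes.reshape(h, w)

    def detect_pixels(frame, background, threshold):
        """Steps e)-g): difference, take |.| to ignore polarity, threshold."""
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        ys, xs = np.nonzero(diff > threshold)
        # The "detected pixel report": location and differenced value.
        return list(zip(xs.tolist(), ys.tolist(), diff[ys, xs].tolist()))

Step h)'s region growing can then be any standard connected-component pass over the thresholded mask (e.g., 8-connectivity), retaining each component's difference-weighted centroid, pixel count, and bounding box for the tracker.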

2. The method of claim 1 further enumerating a list of said tracks as either on an emergent list or on a return list, comprising: specifying a polygon within said FOV; classifying each said track that originates on the inside of said polygon and terminates on the outside of said polygon as an emergent track; and incrementing by one said emergent list for each said track classified as an emergent track; and classifying each said track that originates on the outside of said polygon and terminates on the inside of said polygon as a return track; and incrementing by one said return list for each said track classified as a return track; and labeling any objects remaining as unclassified.

3. The method of claim 2, further comprising differencing said emergent list and said return list to yield a net flow count.
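
Claims 2 and 3 reduce to a point-in-polygon test on each track's endpoints plus a pair of tallies. Below is a hedged sketch using standard ray casting rather than whatever test the inventors used; tracks are lists of (x, y) points and all names are illustrative.

    def point_in_polygon(pt, poly):
        """Ray-casting test; poly is a list of (x, y) vertices."""
        x, y = pt
        inside = False
        j = len(poly) - 1
        for i in range(len(poly)):
            xi, yi = poly[i]
            xj, yj = poly[j]
            if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                inside = not inside
            j = i
        return inside

    def net_flow(tracks, polygon):
        """Claims 2-3: count emergent and return tracks, then difference."""
        emergent = returning = 0
        for track in tracks:
            starts_in = point_in_polygon(track[0], polygon)
            ends_in = point_in_polygon(track[-1], polygon)
            if starts_in and not ends_in:
                emergent += 1       # originates inside, terminates outside
            elif ends_in and not starts_in:
                returning += 1      # originates outside, terminates inside
            # all other tracks remain unclassified
        return emergent, returning, emergent - returning

For a roost survey the polygon would be drawn around the cave or culvert opening, so emergent minus return gives the net number of bats that left during the recorded episode.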

4. The method of claim 1, said tracking algorithm further comprising: establishing four lists, said lists comprising: a Pixel Cluster List, wherein said Pixel Cluster List is loaded for each selected said video frame; a Potential List; a Tracking List; and a Target List, wherein when said pixel cluster is identified to a specific candidate said bat, said pixel cluster is removed from said Pixel Cluster List; implementing four processes, said processes comprising: Tracking Current Targets, wherein said Tracking Current Targets process predicts a search location and radius using a computed motion vector to select a candidate said bat in the current video frame that best fits user-specified search criteria, and wherein said Tracking Current Targets process matches said pixel clusters for all candidate bats on said Tracking List; and Identifying New Targets, wherein said Identifying New Targets process searches for new bats to track using a said search radius based on size; Identifying New Potential Targets, wherein said Identifying New Potential Targets process clears unmatched said bats from said Potential List, creates new candidate objects for each said pixel cluster remaining in said Pixel Cluster List and places all unmatched said bats and said candidate objects in said current video frame on a potential list for input into a next process for next said video frame and adds the candidate bats to said Potential List; and Identifying Completed Tracks, wherein said Identifying Completed Tracks process accepts an input from said Potential List, and wherein said Identifying Completed Tracks process identifies any said bats on said Tracking List that have not had any recent track updates and, based on processing rules, discards said bats without recent track updates or moves said bats without recent track updates to said Target List, wherein said tracking algorithm processes detected said bats across said video frames, producing a time and spatial history for each.
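
The four lists and four processes of claim 4 amount to a small state machine run once per frame. The toy version below keeps the data flow (a Pixel Cluster List of (x, y, size) tuples in; Potential, Tracking, and Target Lists maintained across frames) but matches by nearest neighbor within a fixed radius instead of the claim's size and motion-vector criteria; it is a sketch under those simplifications, not the patented tracker, and all names are illustrative.

    import math
    from dataclasses import dataclass

    @dataclass
    class Track:
        points: list          # [(frame, x, y, size), ...]
        missed: int = 0       # frames since the last successful update

    class FourListTracker:
        def __init__(self, radius=20.0, max_gap=5, min_len=3):
            self.potential, self.tracking, self.targets = [], [], []
            self.radius, self.max_gap, self.min_len = radius, max_gap, min_len

        def _nearest(self, clusters, x, y):
            best = min(clusters, default=None,
                       key=lambda c: math.hypot(c[0] - x, c[1] - y))
            if best and math.hypot(best[0] - x, best[1] - y) <= self.radius:
                return best
            return None

        def step(self, frame, clusters):
            clusters = list(clusters)   # this frame's Pixel Cluster List
            # 1) Tracking Current Targets: extend each active track.
            for tr in self.tracking:
                _, px, py, _ = tr.points[-1]
                best = self._nearest(clusters, px, py)
                if best:
                    tr.points.append((frame, *best))
                    tr.missed = 0
                    clusters.remove(best)   # claimed clusters leave the list
                else:
                    tr.missed += 1
            # 2) Identifying New Targets: promote re-detected potentials.
            for f, x, y, s in self.potential:
                best = self._nearest(clusters, x, y)
                if best:
                    self.tracking.append(Track([(f, x, y, s), (frame, *best)]))
                    clusters.remove(best)
            # 3) Identifying New Potential Targets: leftovers live one frame.
            self.potential = [(frame, *c) for c in clusters]
            # 4) Identifying Completed Tracks: retire stale tracks.
            for tr in [t for t in self.tracking if t.missed > self.max_gap]:
                self.tracking.remove(tr)
                if len(tr.points) >= self.min_len:
                    self.targets.append(tr)   # keep; shorter tracks discard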

5. The method of claim 4, further computing a set of attributes associated with each said bat identified to a said pixel cluster, said attributes selected from the group consisting of frame number, time, clock time, number of said pixels in said pixel cluster, height and width (in pixels) of each said pixel cluster, and combinations thereof, wherein said set of attributes is passed forward to said tracking algorithm for each said identified pixel cluster in each said differenced video frame.

6. The method of claim 4, said processes further comprising: calculating a motion vector for each said track by differencing a last two known positions of said bat, adjusting said motion vector for the number of said video frames, n, since a previous detection of said bat and the number of frames, m, since a most recent detection of said bat; computing a search radius by multiplying the magnitude of said motion vector by a user-specified constant, K; predicting a new position for each said current track by computing a predicted position, wherein said predicted position is computed to be where said current track would extend without any deviation in said motion vector associated therewith, and wherein said K is a maneuverability factor representing the ability of said candidate bat in motion to change speed and direction and is selected to accommodate deviations in said motion vector, and wherein said predicted position is computed using a current location and said computed motion vector, and wherein said predicted position is computed by summing current coordinates and vector increments, and wherein said radius is set to the maximum of either said computed product or a minimum allowed radius, and wherein said minimum radius accommodates said bats that are at the outer range limit of detection and are moving at a rate slower than expected for said bats; using said predicted location and said tracking radius while cycling through said Pixel Cluster List of a current said video frame and calculating a distance between said predicted position and a center position of each said pixel cluster in said current video frame, wherein if said distance is within said computed search radius, a difference in pixel counts for each said bat in said current video frame is calculated for comparison, and wherein if multiple said bats, each identified as one of said pixel clusters, are found within said search radius, a said search radius that is closest to said predicted location and closest in size to an individual tracked said pixel cluster is selected as a best fit, and wherein if a valid candidate said pixel cluster is found, that said candidate pixel cluster is represented as a said bat and is added to tracking information on said bat; and locating any matches to said tracked bat on a current said Potential List, wherein for each potential item on said Potential List, a said at least one algorithm searches through said Pixel Cluster List for said current video frame to locate a said pixel cluster that best matches the location and size of said tracked bat, given that said pixel cluster representing the next location for said tracked bat is within a second radius of δ, and wherein δ is an estimate of the maximum distance that said tracked bat is expected to travel based on its size, N_P, an estimated cross-sectional area, A (m²), a video frame rate, F (Hz), an estimated maximum speed S_m (m/s), and the solid angle, Ω (steradians), of said pixel represented by: δ = S_m / (F·√Ω·√(A/(N_P·Ω))), and wherein, using said location of a potential candidate bat said at least one algorithm searches through said pixel clusters on said Pixel Cluster List of said current video frame, calculating distance between locations and the difference in pixel counts, such that if multiple said candidate bats are found within said search radius, δ, said candidate bat that is closest to said predicted location and closest in size to said pixel cluster from a selected preceding said video frame is selected as a best fit, and wherein if said pixel cluster is found within said search radius, δ, a new pixel cluster is added to the tracking information for a potential said bat and moved to said Tracking List; transferring the remaining said pixel clusters to said Potential List and removing said pixel clusters that are currently on said Potential List before said remaining pixel clusters are added, wherein potential said bats are viewed only for a single video frame cycle, and wherein if a match for a said pixel cluster is not found, said un-matched pixel cluster remains unclassified, and wherein to minimize false said tracks, said pixel cluster must exceed a user-specified minimum size; adding said pixel clusters that exceed said user-specified minimum size to said Potential List for said current video frame; identifying said objects on said Tracking List that have not had any recent track updates, said user specifying a number of consecutive said video frames that may elapse without an update to said track such that when said specified number of consecutive said video frames is reached a said track is considered lost; removing said lost track from said Tracking List and either discarding said lost track or adding said lost track to said Target List; specifying a minimum length of said track that must be reached for said bat to be accepted for said Target List; adding said track to said Target List once said track is accepted either as a continuation of a previous said track or as a new said track; classifying said accepted track as said continuation of a previous track if said track had been obscured by a configuration in said FOV of said at least one digital imaging device; establishing user-specified criteria for concatenation; verifying said classification as said continuation of a previous track by ensuring terminal points of existing said tracked bats in a preceding said video frame meet said user-specified criteria for concatenation; performing a final verification check after all said video frames in a said video sequence have been processed through said at least one algorithm; and specifying a minimum travel distance, d, that each said tracked bat must traverse to be considered valid; computing a smallest enclosing rectangle for said track of said bat; computing the hypotenuse of said smallest enclosing rectangle, and comparing said hypotenuse with said minimum travel distance, d.
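
Numerically, claim 6's prediction step is compact. The sketch below encodes the motion vector scaled by the frame gaps n and m, the extrapolated position, the search radius max(K·|v|, r_min), and one consistent reading of the δ equation (reconstructed above from the claim's variable list, so treat it as an interpretation rather than a quotation). K, r_min, and all names are illustrative defaults.

    import math

    def predict(p_prev, p_last, n, m, K=1.5, r_min=3.0):
        """p_prev, p_last: last two known (x, y); n: frames between them;
        m: frames since p_last. Returns (predicted_xy, search_radius)."""
        vx = (p_last[0] - p_prev[0]) / n     # per-frame motion vector
        vy = (p_last[1] - p_prev[1]) / n
        pred = (p_last[0] + vx * m, p_last[1] + vy * m)
        radius = max(K * math.hypot(vx * m, vy * m), r_min)
        return pred, radius

    def delta_pixels(n_p, area_m2, frame_hz, s_max_mps, omega_sr):
        """Maximum expected per-frame travel, in pixels, for a target of
        n_p pixels and area_m2 cross-section moving at s_max_mps (claim
        6's delta, under the reconstruction used above)."""
        est_range = math.sqrt(area_m2 / (n_p * omega_sr))   # meters
        return s_max_mps / (frame_hz * est_range * math.sqrt(omega_sr))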

7. The method of claim 6, computing said pixel solid angle, Ω, from horizontal and vertical FOVs (h_FOV and v_FOV, in degrees) and number of said pixels within said FOV (h_Pixels and v_Pixels), by implementing one said at least one algorithm as: Ω = (π·h_FOV/(180·h_Pixels))·(π·v_FOV/(180·v_Pixels)).
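
The pixel solid angle reconstructed above is just the product of the per-pixel horizontal and vertical angular extents converted to radians. A minimal transcription, assuming the angular extent divides evenly across the pixel grid:

    import math

    def pixel_solid_angle(h_fov_deg, v_fov_deg, h_pixels, v_pixels):
        """Omega (steradians) per claim 7, as reconstructed above."""
        h_rad = (h_fov_deg / h_pixels) * math.pi / 180.0
        v_rad = (v_fov_deg / v_pixels) * math.pi / 180.0
        return h_rad * v_rad

    # Example: a 640x480 imager with a 25 x 19 degree FOV gives
    # pixel_solid_angle(25, 19, 640, 480) ~ 4.7e-7 sr.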

8. The method of claim 5, said criteria at least comprising: the terminal point of a said track cannot terminate at the edge of the preceding said video frame; the first time of appearance in a successive video frame of a new said track must occur within a reasonable time after that of said terminal point of a said track; and coordinates of said new track must lie within a user-specified distance and angle, β, from said terminal point of a said track, wherein if any of said criteria are not met, said track is added as a said new track on said Target List.

9. The method of claim 1, further comprising fixing in position and orientation as said at least one digital video imaging device at least one digital thermal videographic camera, orienting said digital thermal videographic camera such that said candidate bats move in a direction approximately perpendicular to the line of sight of said digital thermal videographic camera, wherein a combination of factors, said factors to include at least velocity of said candidate object, camera FOV, distance to said candidate object, and frame rate, that determines for how many video frames each said candidate bat is within said FOV of said camera permits imaging of a said candidate bat for at least six consecutive frames.
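
Claim 9's six-consecutive-frame requirement is a geometric constraint that is easy to sanity-check: dwell time in the FOV is the path length across the field at the target's range divided by its speed. A back-of-envelope sketch with made-up numbers:

    import math

    def frames_in_fov(fov_deg, range_m, speed_mps, frame_hz):
        """Approximate frames a target spends crossing the FOV broadside."""
        width_m = 2.0 * range_m * math.tan(math.radians(fov_deg) / 2.0)
        return width_m / speed_mps * frame_hz

    # Example: a 25 degree FOV, bats 50 m away flying 10 m/s, 30 Hz video:
    # frames_in_fov(25, 50, 10, 30) ~ 66 frames, well over the required six.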

10. A method employing at least a specially configured computer in operable communication with at least one digital imaging device for capturing images of objects in motion, said images processed as video frames, said method enabling simultaneous tracking of multiple said objects in motion, said objects in motion having at least one characteristic of their signature in a range that is, at some times, approximately equal to the range of said characteristic in the signature of a background behind said objects in motion, comprising: a) providing at least some specialized software at least some of which implements at least one algorithm on computer readable storage media in operable communication with said specially configured computer; b) employing at least some said specialized software implementing a first said at least one algorithm to create at least one synthetic adaptive temporal background within the field of view (FOV) of each said at least one digital imaging device, said synthetic adaptive temporal background at least removing clutter, wherein synthetic target-free background images are generated as said at least one synthetic temporal background at a time interval and number of sequential source images specified by a user, and wherein the value for each pixel in said synthetic target-free background is determined by taking the mode of the histogram of values of said pixels for each location within a said video image; c) collecting on at least said computer readable storage media at least said video images of said objects in motion, said video images made available as pixels and pixel clusters arranged in video frames, wherein at least said video images of said objects are sent to said specially configured computer for processing, said processing employing at least some of said specialized software; d) for each said object in motion imaged by said at least one digital imaging device, differencing said pixels in said video frames sequentially by subtracting a current said synthetic temporal background to yield differenced said pixels as a differenced image; e) taking the absolute value of each resultant said differenced image, wherein said absolute value is taken to eliminate the effects of polarity; f) thresholding, to a user-specified threshold, those of said differenced pixels at the tail end of the distribution of said differenced pixels, wherein the location and value of said differenced pixels are saved to a detected pixel report for subsequent processing, and wherein said imaged objects in motion are each identifiable as an individual said pixel cluster of said pixels in said video frame of thresholded differenced pixels, and wherein a track of a said imaged object in motion is established if two said individual pixel clusters representing an individual said object in motion exhibit similar size in two successive video frames of said differenced thresholded pixels; g) applying a standard "region growing" technique to find discrete contiguous said pixel clusters to be associated with each candidate said object in motion, wherein applying said "region growing" algorithm establishes a cluster of contiguous single-polarity pixels that identifies an individual said object in motion, and wherein the center location of said pixel cluster, number of said pixels, and boundary dimensions of said pixel cluster are saved for subsequent processing, and wherein said center location [X, Y] of each said pixel cluster is determined by taking an average of all locations of said pixels within 
said pixel cluster weighted by a respective absolute difference value; h) establishing said two pixel clusters of a similar size in at least two successive video frames as location pairs; and i) updating and labeling each said location pair as a motion vector in each subsequent said differenced thresholded video frame, wherein said updating is used to predict a next position of each said object in motion; and j) iterating steps c) through i) said video frame-by-said video frame for each said FOV and respective synthetic temporal background to generate an output of individual said tracks of each said object in motion represented in said video frames, wherein said method enables simultaneous tracking of multiple objects in motion, and wherein said objects in motion may have a thermal signature in a range that is approximately equal to the range of the thermal signature of said stationary background in the FOV of each said digital imaging device, and wherein behavior of said objects in motion may be chaotic.

11. The method of claim 10, further enumerating said tracks as either on an emergent track list or on a return track list, comprising: specifying a polygon within each said FOV; classifying each said track that originates on the inside of each said polygon and terminates on the outside of each said polygon as an emergent track; and incrementing by one said emergent list for each said track classified as an emergent track; and classifying each said track that originates on the outside of said polygon and terminates on the inside of said polygon as a return track; and incrementing by one said return list for each said track classified as return track; and labeling any said objects remaining as unclassified.

12. The method of claim 11, further comprising differencing said emergent list and said return list to yield a net flow count.

13. The method of claim 10, said tracking algorithm further comprising: establishing four lists, said lists comprising: a Pixel Cluster List, wherein said Pixel Cluster List is loaded for each selected said video frame; a Potential List, a Tracking List, and a Target List, wherein when said pixel cluster is identified to a specific candidate said object in motion, said pixel cluster is removed from said Pixel Cluster List; and implementing four processes, said processes comprising: Tracking Current Targets, wherein said Tracking Current Targets predicts a search location and radius using a computed motion vector to select a candidate said object in motion in the current video frame that best fits user-specified search criteria, and wherein said Tracking Current Targets process matches said pixel clusters for all said candidate objects in motion on said Tracking List; and Identifying New Targets, wherein said Identifying New Targets process searches for new said objects in motion to track using a said search radius based on size; Identifying New Potential Targets, wherein said Identifying New Potential Targets process clears unmatched said objects in motion from said Potential List, creates new said candidate objects in motion for each said pixel cluster remaining in said Pixel Cluster List and places all unmatched said objects in motion in said current video frame on a potential list for input into a next process for next said video frame, and adds the candidate bats to said Potential List; and Identifying Completed Tracks, wherein said Identifying Completed Tracks process accepts an input from said Potential List, and wherein said Identifying Completed Tracks process identifies any said objects in motion on said Tracking List that have not had any recent track updates and, based on processing rules, discards said objects in motion without recent track updates or moves said objects in motion without recent track updates to said Target List, wherein said tracking algorithm processes detected said objects in motion across said video frames, producing a time and spatial history for each.

14. The method of claim 13, further computing attributes associated with each said object in motion identified to a said pixel cluster, said attributes selected from the group consisting of frame number, time, clock time, number of said pixels in said pixel cluster, height and width (in pixels) of each said pixel cluster, and combinations thereof, wherein said set of attributes is passed forward to said tracking algorithm for each said identified pixel cluster in each said differenced video frame.

15. The method of claim 13, said processes further comprising: calculating a motion vector for each said track by differencing a last two known positions of said object in motion, adjusting said motion vector for the number of said video frames, n, since a previous detection of said object in motion and the number of frames, m, since a most recent detection of said object in motion; computing a search radius by multiplying the magnitude of said motion vector by a user-specified constant, K; predicting a new position for each said current track by computing a predicted position, wherein said predicted position is computed to be where said current track would extend without any deviation in said motion vector associated therewith, and wherein said K is a maneuverability factor representing the ability of said candidate object in motion to change speed and direction and is selected to accommodate deviations in said motion vector, and wherein said predicted position is computed using a current location and said computed motion vector, and wherein said predicted position is computed by summing current coordinates and vector increments, and wherein said radius is set to the maximum of either said computed product or a minimum allowed radius, and wherein said minimum allowed radius accommodates said objects in motion that are at the outer range limit of detection and are moving at a rate slower than expected for said objects; using said predicted location and said tracking radius while cycling through said Pixel Cluster List of a current said video frame and calculating a distance between said predicted position and a center position of each said pixel cluster in said current video frame, wherein if said distance is within said computed search radius, a difference in pixel counts for each said object in motion in said current video frame is calculated for comparison, and wherein if multiple said objects in motion, each identified as one of said pixel clusters, are found within said search radius, a said search radius that is closest to said predicted location and closest in size to an individual tracked said pixel cluster is selected as a best fit, and wherein if a valid candidate said pixel cluster is found, that said candidate pixel cluster is represented as a said object in motion and is added to tracking information on said object in motion; and locating any matches to said tracked object on a current said Potential List, wherein for each potential item on said Potential List, at least one said algorithm searches through said Pixel Cluster List for said current video frame to locate a said pixel cluster that best matches the location and size of said tracked object in motion, given that said pixel cluster representing the next location for said tracked object in motion is within a second radius of δ, and wherein δ is an estimate of the maximum distance that said tracked object in motion is expected to travel based on its size, N_P, an estimated cross-sectional area, A (m²), a video frame rate, F (Hz), an estimated maximum speed S_m (m/s), and the solid angle, Ω (steradians), of said pixel represented by: δ = S_m / (F·√Ω·√(A/(N_P·Ω))), and wherein, using said location of a potential candidate object in motion said algorithm searches through said pixel clusters on said Pixel Cluster List of said current video frame, calculating distance between locations and the difference in pixel counts, such that if multiple said candidate objects in motion are found within said search radius, δ, said candidate object in motion that is closest to said predicted location and closest in size to said pixel cluster from a selected preceding said video frame is selected as a best fit, and wherein if said pixel cluster is found within said search radius, δ, a new pixel cluster is added to the tracking information for a potential said object in motion and moved to said Tracking List; transferring the remaining said pixel clusters to said Potential List and removing said pixel clusters that are currently on said Potential List before said remaining pixel clusters are added, wherein potential said objects in motion are viewed only for a single video frame cycle, and wherein if a match for a said pixel cluster is not found, said un-matched pixel cluster remains unclassified, and wherein to minimize false said tracks, said pixel cluster must exceed a user-specified minimum size; adding said pixel clusters that exceed said user-specified minimum size to said Potential List for said current video frame; identifying said objects in motion on said Tracking List that have not had any recent track updates, said user specifying a number of consecutive said video frames that may elapse without an update to said track such that when said specified number of consecutive said video frames is reached a said track is considered lost; removing said lost track from said Tracking List and either discarding said lost track or adding said lost track to said Target List; specifying a minimum length of said track that must be reached for said object in motion to be accepted for said Target List; adding said track to said Target List once said track is accepted either as a continuation of a previous said track or as a new said track; classifying said accepted track as said continuation of a previous track if said track had been obscured by a configuration in said FOV of said at least one digital imaging device; establishing user-specified criteria for concatenation; verifying said classification as said continuation of a previous track by ensuring terminal points of existing said tracked objects in a preceding said video frame meet said user-specified criteria for concatenation; performing a final verification check after all said video frames in a said video sequence have been processed through said at least one algorithm; specifying a minimum travel distance, d, that each said tracked object in motion must traverse to be considered valid; computing a smallest enclosing rectangle for said track of said object in motion; computing the hypotenuse of said smallest enclosing rectangle, and comparing said hypotenuse with said minimum travel distance, d.

16. The method of claim 15, computing said pixel solid angle, Ω, from horizontal and vertical FOVs (h_FOV and v_FOV, in degrees) and number of said pixels within said FOV (h_Pixels and v_Pixels), by implementing one said at least one algorithm as: Ω = (π·h_FOV/(180·h_Pixels))·(π·v_FOV/(180·v_Pixels)).

17. The method of claim 15, said criteria at least comprising: the terminal point of a said track cannot terminate at the edge of the preceding said video frame; the first time of appearance in a successive video frame of a new said track must occur within a reasonable time after that of said terminal point of a said track; and coordinates of said new track must lie within a user-specified distance and angle, β, from said terminal point of a said track, wherein if any of said criteria are not met, said track is added as a said new track on said Target List.

18. The method of claim 13 creating said video frames from said at least one digital imaging device operating in the frequency band selected from the group consisting of infrared light, ultraviolet light, visible light, radio frequencies (RF), acoustic, and combinations thereof.

19. The method of claim 10 further comprising fixing in position and orientation as said at least one digital video imaging device at least one digital thermal videographic camera, orienting said digital thermal videographic camera such that candidate said objects in motion move in a direction approximately perpendicular to the line of sight of said digital thermal videographic camera, wherein a combination of factors, said factors to include at least velocity of said candidate object, camera FOV, distance to said candidate object, and frame rate, that determines for how many video frames each said candidate object is within said FOV of said camera permits imaging of a said candidate object in motion for at least six consecutive frames.

20. A system enabling simultaneous tracking of multiple objects in motion, candidate said objects in motion having at least one characteristic of their signature in a range that is approximately equal to the range of said characteristic in the signature of an established temporal background behind said objects in motion, comprising: at least one tripod; computer readable memory storage media, at least some of said computer readable memory storage media containing at least specialized software implementing at least one specially adapted algorithm; a specially configured computer in operable communication with said computer readable memory storage media; at least one digital imaging device, each said at least one imaging device in operable communication with one said at least one tripod, said at least one digital imaging device for capturing images of said multiple objects in motion that may be processed as video frames, said at least one digital imaging device in operable communication with said at least one specially configured computer, wherein said images are collected on at least some of said computer readable memory storage media, said images made available as pixels that may be arranged as pixel clusters in said video frames, and wherein said specially configured computer processes said captured images by employing at least one said algorithm, and wherein a first said at least one algorithm is applied so that for each said candidate object in motion, said pixels are differenced in said video frames sequentially by subtracting an adaptive temporal background, said subtracting at least removing clutter from said differenced frame, a second said at least one algorithm further enabling thresholding to remove those of said pixels at the tail ends of the distribution of said differenced pixels, resulting in said candidate objects in motion appearing as the only pixel clusters in said thresholded differenced video frame, and further establishing a track of one said candidate object in motion if two said pixel clusters exhibit similar size in a successive frame processed after said initial differenced thresholded video frame, and wherein said two pixel clusters of a similar size are then referred to as location pairs, and wherein said location pairs define a motion vector that is updated in each subsequent said differenced thresholded video frame to predict a next position of said candidate object in motion, and wherein each said at least one algorithm is iterated for successive said video frames to generate an output of at least individual said tracks of each said candidate object in motion that is represented in said differenced thresholded video frames, wherein each said candidate object in motion identified to an individual track that originates on the inside of a said pre-specified polygon and terminates on the outside of said polygon is classified as an emergent track, and wherein an emergence tally is incremented by one for each said candidate object in motion identified thereby, and wherein each said candidate object in motion that originates on the outside of said polygon and terminates on the inside of said polygon is classified as return track, and wherein a return tally is incremented by one for each said candidate object in motion identified thereby, and wherein all other said candidate objects in motion are considered unclassified.

