Fast extraction of polyhedral model silhouettes from moving viewpoint on curved trajectory
Introduction
In computer graphics, efficient silhouette computation has received intensive attention, as silhouettes are essential for a wide range of applications, such as non-photorealistic rendering, shadow computation, and backface culling. Since the silhouette of a model is the most important visual cue for recognizing its shape, even in model simplification the silhouette of the original model is used to make the simplified model look more realistic [1].
When silhouette extraction is used in computer animation, the camera position is considered as the viewpoint and camera path as the viewpoint trajectory. The camera path is usually given as a curve, so the corresponding viewpoint also moves along a curved trajectory, which makes the efficiency of the silhouette detection algorithm more critical than in the case of a fixed viewpoint. Fast silhouette extraction is also important in geometric modeling systems, where swept volumes are often used to create objects. The silhouettes of a model from a moving viewpoint can be used as a starting point for the swept volume computation [2], [3]. If sequential silhouettes of a model are extracted in real-time, the swept volume can then be computed in pseudo real-time.
Isenberg et al. [4] classified silhouette detection algorithms into object space algorithms [1], [5], [6], [7], [8], image space algorithms [9], [10], and hybrid algorithms [11], [12], [13]. Among them, object space algorithms have the advantages of computing silhouettes in an analytic form, obtaining exact silhouettes, and convenient stylization or further processing of the silhouettes. While there are many efficient algorithms that compute silhouettes from a fixed viewpoint, there are relatively few algorithms applicable to a moving viewpoint. Accordingly, this paper presents a silhouette detection algorithm for a polyhedral model from a moving viewpoint in object space.
Depending on the viewpoint, there are two types of silhouette: a perspective silhouette and a parallel silhouette. For polyhedral models, both types of silhouette can be defined as a set of model edges with a special property. Let E be an edge of the polyhedral model, and let F1 and F2 be the two facets sharing E. Edge E is in the perspective silhouette from viewpoint v if and only if the facets F1 and F2 are on the same side of the plane containing viewpoint v and edge E. To compute a parallel silhouette, a view direction d is given instead, which corresponds to an idealized viewpoint at an infinite distance from the model. Edge E is contained in the parallel silhouette with respect to the view direction d if and only if the facets F1 and F2 are on the same side of the plane containing edge E and parallel to vector d. Usually, the computation of a parallel silhouette is easier than that of a perspective silhouette. Consequently, this paper considers the problems involved with computing perspective silhouettes, although the proposed algorithms can also be extended with minor modifications to compute parallel silhouettes. Thus, in the remainder of this paper, silhouette implies a perspective silhouette.
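The per-edge perspective test above can be sketched in C. This is a minimal illustration, not the paper's code: the Vec3 type, the choice of a point p on E, and the outward normals n1, n2 are assumptions about how the mesh is stored. An edge is a silhouette edge exactly when one adjacent facet faces the viewpoint and the other faces away, which is a sign test on two dot products.

```c
#include <stdbool.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }

/* Edge E (with any point p on it) is a perspective silhouette edge from
 * viewpoint v iff its two adjacent facets, with outward normals n1 and n2,
 * lie on the same side of the plane through v and E -- equivalently, iff
 * the two facet orientations relative to v differ in sign. */
bool is_silhouette_edge(Vec3 v, Vec3 p, Vec3 n1, Vec3 n2)
{
    Vec3 d = sub(v, p);
    return dot(n1, d) * dot(n2, d) <= 0.0;
}
```

For a parallel silhouette the same test applies with d replaced by the fixed view direction.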
Let C(t) be the viewpoint trajectory, where t is the parameter denoting time. To compute sequential silhouettes from viewpoints on C(t), one intuitive way is to repeatedly apply a silhouette extraction algorithm for a fixed viewpoint while changing the position of the viewpoint. However, this technique ignores the temporal coherence of sequential silhouettes. Few differences can be expected between the silhouettes from two close viewpoints C(t1) and C(t2). If one of the model edges is contained in the silhouette from viewpoint C(t1), there is a high possibility that the same edge will also be in the silhouette from viewpoint C(t2). Thus, when the viewpoint moves with respect to time, a particular model edge is contained in the silhouette for a time-interval rather than a set of discrete time points.
Two efficient algorithms are presented to extract a sequence of silhouettes for a polyhedral model from a moving viewpoint: (i) an incremental update algorithm and (ii) a search-based algorithm. Both algorithms require preprocessing to compute the time-intervals during which an edge is included in the silhouette; these are denoted as silhouette time-intervals. It is assumed that the viewpoint moves along a trajectory C(t). The end points of the silhouette time-intervals can then be computed for each edge E by intersecting C(t) with the supporting planes of the two facets that share E. When the degree of C(t) is n, the end points of the time-intervals can be computed by solving two equations of degree n, so the number of silhouette time-intervals for a model edge is bounded by the number of roots of these equations.
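For the simplest case of a linear (degree-1) trajectory C(t) = a + t·d, each interval end point is the root of one linear equation per supporting plane. The following C sketch illustrates only this degree-1 case (the plane representation by a point p and normal n is an assumption); for a degree-n trajectory the same dot products yield a degree-n polynomial whose roots must be found numerically.

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }

/* Solve n . (C(t) - p) = 0 for the linear trajectory C(t) = a + t*d.
 * Writes the crossing time to *t_out and returns 1, or returns 0 when
 * the trajectory is (numerically) parallel to the supporting plane. */
int plane_crossing_time(Vec3 a, Vec3 d, Vec3 p, Vec3 n, double *t_out)
{
    double denom = dot(n, d);
    if (fabs(denom) < 1e-12)
        return 0;
    *t_out = dot(n, sub(p, a)) / denom;
    return 1;
}
```

Running this once per supporting plane of the two facets sharing an edge yields the candidate end points of that edge's silhouette time-intervals.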
The incremental update algorithm is considered first. When extracting f frames of silhouettes from a sequence of viewpoints v1, ..., vf, a data structure is constructed that manages, for each viewpoint vj, the information on which edges begin and cease to be included in the silhouette. By using this data structure, a silhouette can then be extracted at each time point by adding and deleting the appropriate edges from the previous silhouette. When computing the silhouette from viewpoint vj, the edges in the previous silhouette are visited and each appropriate edge is added or deleted just once, making this an optimal algorithm.
In contrast to the above incremental update algorithm, the search-based algorithm directly uses the set of silhouette time-intervals with the corresponding edge information, stored as tuples ([b, e], E), where b and e are the beginning and ending time points, respectively, for edge E to be included in the silhouette. The silhouette from viewpoint vj can then be extracted by searching for the intervals that contain tj. To implement the search-based algorithm efficiently, an interval tree is used as the main data structure.
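The query itself can be illustrated with a linear scan over the interval set. This is only a stand-in sketch in C (the SilInterval layout is an assumption); the actual algorithm replaces the scan with an interval tree so that a query costs logarithmic time plus output size rather than time linear in the number of intervals.

```c
typedef struct {
    double b, e;   /* beginning and ending time of the interval */
    int    eid;    /* identifier of edge E */
} SilInterval;

/* Collect the ids of all edges whose silhouette time-interval [b, e]
 * contains the query time t.  Returns the number of edges found; out
 * must have room for one id per interval. */
int silhouette_at(const SilInterval *iv, int n, double t, int *out)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (iv[i].b <= t && t <= iv[i].e)
            out[count++] = iv[i].eid;
    return count;
}
```

Each query returns exactly the silhouette edge set for one frame, independently of the previous frame, which is what distinguishes the search-based algorithm from the incremental one.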
When the length of the viewpoint trajectory exceeds a threshold, the incremental update algorithm produces a better performance than the search-based algorithm. Thus, the current experiments are focused on showing the performance of the incremental update algorithm compared to previous work. The contributions of the proposed incremental update algorithm are summarized as follows:
It is a novel algorithm that defines and uses silhouette time-intervals for computing a sequence of object space silhouettes. Based on the preprocessed silhouette time-intervals, the incremental update algorithm can extract a silhouette sequence in an optimal way. Experiments show that the performance of the incremental update algorithm is better than that of previous work.
Various efficient silhouette extraction algorithms already exist, such as [1], [8], yet they rely on heuristics that make the implementation difficult and relatively time-consuming. Compared to such algorithms, the incremental update algorithm is analytic, as well as simple and easy to implement.
The algorithms in [6], [7] do not require heuristics and are easy to implement. However, these algorithms do not always guarantee the extraction of exact silhouettes from a moving viewpoint, as they require the selection of appropriate starting points to trace the silhouette for each frame. Compared to these algorithms, the incremental update algorithm always provides accurate and complete silhouettes.
The limitations of the proposed incremental update algorithm are mainly related to its preprocessing requirement:
The algorithm requires the computation of silhouette time-intervals whenever a new viewpoint trajectory is given.
The algorithm needs memory space in proportion to the number of silhouette time-intervals.
The remainder of this paper is organized as follows. Section 2 presents related work, then the computation of silhouette time-intervals is explained in Section 3. Section 4 outlines the proposed search-based algorithm with two data structures: an array and interval tree, while Section 5 introduces the proposed incremental update algorithm. The performance of both algorithms is analyzed in Section 6, plus the proposed incremental update algorithm is experimentally compared with the anchored cone algorithm proposed by Sander et al. [1] in Section 7. Some final conclusions and areas for future work are given in Section 8.
Related work
The silhouette extraction problem has already been investigated using many different approaches. Thus, according to the classification of Isenberg et al. [4], this paper discusses the trade-offs between existing silhouette detection algorithms based on the following categories: object space, image space, and hybrid algorithms.
Image space algorithms [9], [10], [13] extract silhouettes from a geometric buffer, such as a depth buffer or normal buffer, by detecting the discontinuities in the image.
Computation of silhouette time-intervals
This section presents the derivation of equations to compute the silhouette time-intervals for an edge E in a model. It is assumed that E is shared by two facets F1 and F2, and that the end vertices of E are V1 and V2. The outward normal vectors of F1 and F2 are denoted as N1 and N2, respectively. It is also assumed that C(t), t ∈ [ts, te], is the trajectory of the moving viewpoint. Edge E is part of the silhouette from viewpoint C(t) if and only if (N1 · (C(t) − V1))(N2 · (C(t) − V1)) ≤ 0, that is, one facet faces the viewpoint while the other faces away.
For an arbitrary
Search-based algorithm
This section presents a search-based algorithm for extracting a sequence of silhouettes from a moving viewpoint. Assume that silhouettes are to be extracted for f frames from a moving viewpoint on C(t), t ∈ [ts, te]. The silhouettes are then computed from viewpoints vj = C(tj), where t1 = ts and tf = te. For the polyhedral model M, the set of silhouette edges of M from viewpoint vj is denoted as Sj, where j is the frame identification number. The algorithm for
Incremental update algorithm
The basic idea of the proposed incremental update algorithm is to determine, in the preprocessing step, when an edge begins or ceases to be a silhouette edge. The preprocessed silhouette time-interval information facilitates the extraction of sequential silhouettes, and linked-list structures Add[j] and Delete[j] keep the lists of edges that need to be added to or deleted from the silhouette at each time point tj, 1 ≤ j ≤ f:

    struct EdgeList {
        int Eid;
        struct EdgeList *link;
    } *Add[NUM_FRAME], *Delete[NUM_FRAME];
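Given such per-frame lists, one frame of the incremental update reduces to walking the two lists. The sketch below is an illustration under the assumption that the current silhouette is represented as a boolean membership array inSil indexed by edge id; the paper's own representation may differ.

```c
#include <stdbool.h>
#include <stddef.h>

struct EdgeList {
    int Eid;
    struct EdgeList *link;
};

/* Advance the silhouette from frame j-1 to frame j: mark every edge in
 * frame j's Add list as a silhouette edge and unmark every edge in its
 * Delete list.  Over all frames, each edge is inserted and removed once
 * per silhouette time-interval, so the total update work is proportional
 * to the number of interval end points. */
void update_silhouette(bool *inSil, const struct EdgeList *add,
                       const struct EdgeList *del)
{
    for (const struct EdgeList *p = add; p != NULL; p = p->link)
        inSil[p->Eid] = true;
    for (const struct EdgeList *p = del; p != NULL; p = p->link)
        inSil[p->Eid] = false;
}
```

Calling this once per frame, in frame order, reproduces each Sj from its predecessor without re-testing every model edge.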
Performance analysis
The time and space complexity of the brute force algorithm and the proposed algorithms are presented in Table 1, where E is the number of edges in the model, I is the number of silhouette time-intervals, s is the average number of edges in a silhouette, and f is the number of frames.
The search-based algorithm using an interval tree has a better time complexity than that using an interval array, yet the preprocessing for the interval tree takes considerably longer than that for the interval array. When the
Experimental results
This section compares three algorithms: the proposed incremental update algorithm, the anchored cone algorithm [1], and a brute force algorithm. The incremental update algorithm was implemented in C with the OpenGL library, and the three algorithms were tested by computing a sequence of perspective silhouettes for four polyhedral models (see Fig. 4) on a PC with a Pentium 4 (2.80 GHz) processor and 1 GB RAM.
The models were normalized within a
Conclusions
Algorithms were presented for the fast extraction of a sequence of perspective silhouettes for a polyhedral model from a moving viewpoint. The viewpoint is assumed to move along a trajectory C(t) that is a space curve of the time parameter t. Time-intervals are determined during which each edge of the model is included in the silhouette, based on two major computations: (i) intersecting C(t) with two planes and (ii) a number of dot products. If C(t) is a curve of degree n, then there are at
Acknowledgements
The authors would like to thank Dr. Pedro Sander and his colleagues for providing the source code for the anchored cone algorithm. Dr. Kim was supported by the Korea Research Foundation Grant funded by Korean Government (MOEHRD) (R04-2004-000-10099-0). Dr. Baek was supported by the Kyungpook National University Research Fund, 2004.
References (17)
- Martin RR, Stephenson PC. Sweeping of three-dimensional objects. Computer-Aided Design (1990)
- Chaikin GM. An algorithm for high speed curve generation. Computer Graphics and Image Processing (1974)
- Sander PV, et al. Silhouette clipping
- Weld JD, Leu MC. Geometric representation of swept volumes with application to polyhedral objects. The International Journal of Robotics Research (1990)
- Isenberg T, et al. A developer's guide to silhouette algorithms for polygonal models. IEEE Computer Graphics and Applications (2003)
- Benichou F, Elber G. Output sensitive extraction of silhouettes from polygonal geometry. In: Proceedings of the Pacific...
- Hertzmann A, Zorin D. Illustrating smooth surfaces
- Markosian L, et al. Real-time nonphotorealistic rendering
Ku-Jin Kim is an Assistant Professor in the Department of Computer Engineering at Kyungpook National University, Republic of Korea. Her research interests include computer graphics, non-photorealistic rendering, and geometric/surface modeling. Prof. Kim received a B.S. degree from Ewha Womans University in 1990, an M.S. degree from KAIST in 1992, and a Ph.D. degree from POSTECH in 1998, all in Computer Science. She was a PostDoctoral fellow at Purdue University in 1998–2000. Prof. Kim also held faculty positions at Ajou University, Korea, and at University of Missouri, St. Louis, USA.
Nakhoon Baek is currently an assistant professor in the School of Electrical Engineering & Computer Science at Kyungpook National University. He received his B.A., M.S., and Ph.D. in Computer Science from the Korea Advanced Institute of Science and Technology in 1990, 1992, and 1997, respectively. His research interests include real-time rendering and computational geometry applications.