Elsevier

Computers & Graphics

Volume 29, Issue 3, June 2005, Pages 393-402

Fast extraction of polyhedral model silhouettes from moving viewpoint on curved trajectory

https://doi.org/10.1016/j.cag.2005.03.009

Abstract

The efficient extraction of model silhouettes is essential in many applications, such as non-photorealistic rendering, backface culling, shadow computation, and computing swept volumes. When the viewpoint moves dynamically, efficient silhouette extraction becomes even more critical to system performance. Accordingly, this paper presents an incremental update algorithm for computing a perspective silhouette sequence for a polyhedral model. The viewpoint is assumed to move along a given trajectory q(t), where t is the time parameter. As the preprocessing step, the time-intervals during which each model edge is contained in the silhouette, defined as silhouette time-intervals, are computed using two major computations: (i) intersecting q(t) with two planes and (ii) a number of dot products. If q(t) is a curve of degree n, there are at most n+1 silhouette time-intervals for an individual edge. The silhouette time-intervals are then used to determine the edges that should be added or deleted from the previous silhouette for each discrete viewpoint, thereby providing an optimal way to compute a sequence of silhouettes. A search-based algorithm is also presented that extracts the silhouette edges for each time point tj by searching the silhouette time-intervals containing tj. The performance of the proposed algorithms is analyzed and experimental results are compared with those for the anchored cone algorithm suggested by Sander et al. [In: Akeley K, editor. Siggraph 2000, Computer Graphics Proceedings. Annual Conference Series. New York: ACM Press/ACM SIGGRAPH/Addison-Wesley Longman; 2000. p. 327–34.]

Introduction

In computer graphics, efficient silhouette computation has received intensive attention, as silhouettes are essential for a wide range of applications, such as non-photorealistic rendering, shadow computation, and backface culling. Since the silhouette of a model is the most important visual cue for recognizing its shape, even in model simplification the silhouette of the original model is used to make the simplified model look more realistic [1].

When silhouette extraction is used in computer animation, the camera position is considered as the viewpoint and camera path as the viewpoint trajectory. The camera path is usually given as a curve, so the corresponding viewpoint also moves along a curved trajectory, which makes the efficiency of the silhouette detection algorithm more critical than in the case of a fixed viewpoint. Fast silhouette extraction is also important in geometric modeling systems, where swept volumes are often used to create objects. The silhouettes of a model from a moving viewpoint can be used as a starting point for the swept volume computation [2], [3]. If sequential silhouettes of a model are extracted in real-time, the swept volume can then be computed in pseudo real-time.

Isenberg et al. [4] classified silhouette detection algorithms into object space algorithms [1], [5], [6], [7], [8], image space algorithms [9], [10], and hybrid algorithms [11], [12], [13]. Among them, object space algorithms have the advantages of computing silhouettes in an analytic form, obtaining exact silhouettes, and convenient stylization or further processing of the silhouettes. While there are many efficient algorithms that compute silhouettes from a fixed viewpoint, there are relatively few algorithms applicable to a moving viewpoint. Accordingly, this paper presents a silhouette detection algorithm for a polyhedral model from a moving viewpoint in object space.

Depending on the viewpoint, there are two types of silhouette: a perspective silhouette and a parallel silhouette. For polyhedral models, both types of silhouette can be defined as a set of model edges with a special property. Let E be an edge of the polyhedral model, and let F1 and F2 be the two facets sharing E. Edge E is in a perspective silhouette from viewpoint q if and only if the facets F1 and F2 are on the same side of the plane containing viewpoint q and edge E. To compute a parallel silhouette, a view direction is given instead, which corresponds to an idealized viewpoint at an infinite distance from the model. Edge E is contained in the parallel silhouette with respect to the view direction q if and only if the facets F1 and F2 are on the same side of the plane containing edge E and parallel to vector q. Usually, the computation of a parallel silhouette is easier than that of a perspective silhouette. Consequently, this paper considers the problems involved in computing perspective silhouettes, although the proposed algorithms can also be extended with minor modifications to compute parallel silhouettes. Thus, in the remainder of this paper, silhouette implies a perspective silhouette.

Let q(t) be the viewpoint trajectory, where t is the parameter denoting time. To compute sequential silhouettes from viewpoints on q(t), one intuitive way is to repeatedly apply a silhouette extraction algorithm from a fixed viewpoint, while changing the position of the viewpoint. However, this technique ignores the temporal coherence in sequential silhouettes. Few differences can be expected between silhouettes from two close viewpoints q(ti) and q(ti+ε). If one of the model edges is contained in the silhouette from viewpoint q(ti), there is a high possibility that the same edge will also be in the silhouette from viewpoint q(ti+ε). Thus, when assuming that the viewpoint moves with respect to time, a particular model edge should be contained in a silhouette for a time-interval rather than a set of discrete time points.

Two efficient algorithms are presented to extract a sequence of silhouettes for a polyhedral model from a moving viewpoint: (i) an incremental update algorithm and (ii) a search-based algorithm. Both algorithms require preprocessing that computes the time-intervals during which each edge is included in the silhouette; these are denoted silhouette time-intervals. It is also assumed that the viewpoint moves along a trajectory q(t). The end points of the silhouette time-intervals can then be computed for each edge E by intersecting q(t) with the supporting planes of the two facets that share E. When the degree of q(t) is n, the end points of the time-intervals can be computed by solving two equations of degree n. For a model edge, the number of silhouette time-intervals is at most n+1.

The incremental update algorithm is considered first. When extracting f frames of silhouettes from a sequence of viewpoints q(tj), 0 ≤ j < f, a data structure is constructed that records, for each viewpoint q(tj), which edges begin and which cease to be included in the silhouette. Using this data structure, the silhouette at each time point tj can be extracted by adding and deleting the appropriate edges from the previous silhouette. When computing the silhouette from viewpoint q(tj), only the edges in the previous silhouette from q(tj-1) are visited, and each appropriate edge is added or deleted exactly once, making this an optimal algorithm.

In contrast to the above incremental update algorithm, the search-based algorithm directly uses the set of silhouette time-intervals with the corresponding edge information, such as (b,e,E), where b and e are the beginning and ending time points, respectively, for edge E to be included in the silhouette. Then, the silhouette from viewpoint q(tj) can be extracted by searching the intervals containing tj. To implement the search-based algorithm efficiently, an interval tree is used as the main data structure.

When the length of the viewpoint trajectory exceeds a threshold, the incremental update algorithm performs better than the search-based algorithm. Thus, the current experiments focus on comparing the performance of the incremental update algorithm with previous work. The contributions of the proposed incremental update algorithm are summarized as follows:

  • It is a novel algorithm that defines and uses silhouette time-intervals for computing the sequence of object space silhouettes. Based on preprocessed silhouette time-intervals, the incremental update algorithm can extract a silhouette sequence in an optimal way. Experiments show that the performance of the incremental update algorithm is better than that in the previous work.

  • Various efficient silhouette extraction algorithms already exist, such as [1], [8], yet they require heuristics that make the implementation difficult and take a relatively long time. Thus, when compared to such algorithms, the incremental update algorithm is analytic, plus it is simple and easy to implement.

  • The algorithms in [6], [7] do not require heuristics and are easy to implement. However, these algorithms do not always guarantee the extraction of exact silhouettes from a moving viewpoint, as they require the selection of appropriate starting points to trace the silhouette for each frame. Thus, when compared to these algorithms, the incremental update algorithm always provides accurate whole silhouettes.

The limitations of the proposed incremental update algorithm are mainly related to its preprocessing requirement:

  • The algorithm requires the computation of silhouette time-intervals whenever a new viewpoint trajectory is given.

  • The algorithm needs memory space in proportion to the number of silhouette time-intervals.

The remainder of this paper is organized as follows. Section 2 presents related work, then the computation of silhouette time-intervals is explained in Section 3. Section 4 outlines the proposed search-based algorithm with two data structures: an array and interval tree, while Section 5 introduces the proposed incremental update algorithm. The performance of both algorithms is analyzed in Section 6, plus the proposed incremental update algorithm is experimentally compared with the anchored cone algorithm proposed by Sander et al. [1] in Section 7. Some final conclusions and areas for future work are given in Section 8.

Section snippets

Related work

The silhouette extraction problem has already been investigated using many different approaches. Thus, according to the classification of Isenberg et al. [4], this paper discusses the trade-offs between existing silhouette detection algorithms based on the following categories: object space, image space, and hybrid algorithms.

Image space algorithms [9], [10], [13] extract silhouettes from a geometric buffer, such as a depth buffer or normal buffer, by detecting the discontinuities in the image.

Computation of silhouette time-intervals

This section presents the derivation of equations to compute the silhouette time-intervals for an edge E in a model. It is assumed that E is shared by two facets F1 and F2, and the end vertices of E are p1 and p2. The outward normal vectors of F1 and F2 are denoted as N1 and N2, respectively. It is also assumed that q(t), tmin ≤ t ≤ tmax, is the trajectory of a moving viewpoint. Edge E is part of the silhouette from viewpoint q(t) if and only if ((q(t)-p1)·N1)((q(t)-p1)·N2) ≤ 0.

For an arbitrary

Search-based algorithm

This section presents a search-based algorithm for extracting a sequence of silhouettes from a moving viewpoint. Assume the extraction of silhouettes for f frames from a moving viewpoint on q(t), tmin ≤ t ≤ tmax, is considered. The silhouettes are then supposed to be computed from viewpoints q(tj), where tj ∈ [tmin, tmax] and j = 0, 1, 2, …, f. For the polyhedral model P, the set of silhouette edges for P from viewpoint q(tj) is denoted as S(j), where j is the frame identification number. The algorithm for

Incremental update algorithm

The basic idea of the proposed incremental update algorithm is to determine when an edge begins or ceases to be a silhouette edge in the preprocessing step. The preprocessing of silhouette time-interval information can facilitate the extraction of sequential silhouettes, and data structures are used to keep the list of edges that need to be added or deleted from the silhouette for each time point tj, 0 ≤ j ≤ f: Add[j] and Delete[j].

struct EdgeList {
 int Eid;                 /* edge identifier */
 struct EdgeList *link;   /* next node in the list */
} *Add[NUM_FRAME], *Delete[NUM_FRAME];

Performance analysis

The time and space complexity of the brute force algorithm and proposed algorithms are presented in Table 1, where N is the number of edges in P, k is the number of silhouette time-intervals, M is the average number of edges in the silhouette, and f is the number of frames.

The search-based algorithm using an interval tree has a better time complexity than that using an interval array, yet the preprocessing for the interval tree took considerably longer than that for the interval array. When the

Experimental results

This section compares three algorithms: the proposed incremental update algorithm, the anchored cone algorithm [1], and a brute force algorithm. The incremental update algorithm was implemented in C with the OpenGL library, and the three algorithms were tested by computing a sequence of perspective silhouettes for four polyhedral models: bunny, hand, gargoyle, and parasaur (see Fig. 4) on a PC with a Pentium 4 (2.80 GHz) processor and 1 GB RAM.

The models were normalized within a

Conclusions

Algorithms were presented for the fast extraction of a sequence of perspective silhouettes for a polyhedral model from a moving viewpoint. The viewpoint is assumed to move along a trajectory q(t) that is a space curve of the time parameter t. As such, time-intervals are determined when each edge of the model is included in the silhouette based on two major computations: (i) intersecting q(t) with two planes and (ii) a number of dot products. If q(t) is a curve of degree n, then there are at

Acknowledgements

The authors would like to thank Dr. Pedro Sander and his colleagues for providing the source code for the anchored cone algorithm. Dr. Kim was supported by the Korea Research Foundation Grant funded by Korean Government (MOEHRD) (R04-2004-000-10099-0). Dr. Baek was supported by the Kyungpook National University Research Fund, 2004.


References (17)

  • R. Martin et al. Sweeping of three-dimensional objects. Computer-Aided Design (1990).

  • G. Chaikin. An algorithm for high speed curve generation. Computer Graphics and Image Processing (1974).

  • P.V. Sander et al. Silhouette clipping.

  • J. Weld et al. Geometric representation of swept volumes with application to polyhedral objects. The International Journal of Robotics Research (1990).

  • T. Isenberg et al. A developer's guide to silhouette algorithms for polygonal models. IEEE Computer Graphics and Applications (2003).

  • F. Benichou, G. Elber. Output sensitive extraction of silhouettes from polygonal geometry. In: Proceedings of the pacific...

  • A. Hertzmann et al. Illustrating smooth surfaces.

  • L. Markosian et al. Real-time nonphotorealistic rendering.

Ku-Jin Kim is an Assistant Professor in the Department of Computer Engineering at Kyungpook National University, Republic of Korea. Her research interests include computer graphics, non-photorealistic rendering, and geometric/surface modeling. Prof. Kim received a B.S. degree from Ewha Womans University in 1990, an M.S. degree from KAIST in 1992, and a Ph.D. degree from POSTECH in 1998, all in Computer Science. She was a PostDoctoral fellow at Purdue University in 1998–2000. Prof. Kim also held faculty positions at Ajou University, Korea, and at University of Missouri, St. Louis, USA.

Nakhoon Baek is currently an assistant professor in the School of Electrical Engineering & Computer Science at Kyungpook National University. He received his B.A., M.S., and Ph.D. in Computer Science from the Korea Advanced Institute of Science and Technology in 1990, 1992, and 1997, respectively. His research interests include real-time rendering and computational geometry applications.
