7.3. Surfaces Defined by Point Clouds

In the past few years, point clouds have had a renaissance, caused by the widespread availability of 3D scanning technology. Such devices produce a huge number of points, each of which lies on the surface of the real object being scanned (see Figure 7.19). These sets of points are called point clouds. For one object, there can be tens of millions of points. Usually, they are not ordered (or only partially ordered) and contain some noise.

Figure 7.19: 3D scanners (left) produce large point clouds (right) from real objects (middle).

In order to render [Pfister et al. 00, Rusinkiewicz and Levoy 00, Zwicker et al. 02, Bala et al. 03] and interact [Klein and Zachmann 04a] with objects represented as point clouds, one must define an appropriate surface (even if it is not explicitly reconstructed).

This definition should produce a surface as close to the original surface as possible while being robust against noise (introduced by the scanning process). At the same time, it should allow the fastest possible rendering and interaction with the object.

In the following, we present a fairly simple definition [Klein and Zachmann 04b].

7.3.1. Implicit Surface Model

The surface definition begins with weighted least squares interpolation (WLS).

Let a point cloud P with N points p_i ∈ ℝ^d be given. Then an appealing definition of the surface defined by P is the zero set S = { x | f(x) = 0 } of an implicit function [Adamson and Alexa 03]

(7.2)  $f(x) = n(x) \cdot (x - a(x)),$

where a(x) is the weighted average of all points p_i ∈ P,

(7.3)  $a(x) = \dfrac{\sum_{i=1}^{N} \theta(\lVert x - p_i \rVert)\, p_i}{\sum_{i=1}^{N} \theta(\lVert x - p_i \rVert)}.$

Usually, a Gaussian kernel (weight function),

(7.4)  $\theta(d) = \mathrm{e}^{-d^2/h^2},$

is used, but other kernels work as well.

The bandwidth of the kernel, h , allows us to tune the decay of the influence of the points. It should be chosen such that no holes appear [Klein and Zachmann 04a].

Theoretically, the support of θ is unbounded. However, it can be safely limited to the region where the kernel falls below the machine's precision, or below some other suitably small threshold ε. Alternatively, one could use the cubic polynomial [Lee 00]

$\theta(d) = 2\,\dfrac{d^3}{h^3} - 3\,\dfrac{d^2}{h^2} + 1,$

or the tricube weight function [Cleveland and Loader 95]

$\theta(d) = \left(1 - \dfrac{d^3}{h^3}\right)^{3},$

or the Wendland function [Wendland 95]

$\theta(d) = \left(1 - \dfrac{d}{h}\right)^{4}\left(4\,\dfrac{d}{h} + 1\right),$

all of which are set to 0 for d > h and thus have compact support (see Figure 7.20 for a comparison). However, the choice of the kernel function is not critical [Härdle 90].
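For reference, these kernels can be written down directly. The following sketch (Python/NumPy; the function names are ours, not from the original work) clamps d/h to [0, 1], so the three compactly supported kernels return 0 for d > h:

```python
import numpy as np

def gaussian(d, h):
    # Unbounded support; in practice truncated where it falls below a small threshold eps.
    return np.exp(-(d / h) ** 2)

def cubic(d, h):
    # Cubic polynomial kernel [Lee 00].
    t = np.clip(d / h, 0.0, 1.0)
    return 2.0 * t**3 - 3.0 * t**2 + 1.0

def tricube(d, h):
    # Tricube weight function [Cleveland and Loader 95].
    t = np.clip(d / h, 0.0, 1.0)
    return (1.0 - t**3) ** 3

def wendland(d, h):
    # Wendland function [Wendland 95].
    t = np.clip(d / h, 0.0, 1.0)
    return (1.0 - t) ** 4 * (4.0 * t + 1.0)
```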

Figure 7.20: Different weight functions (kernels). Note that the horizontal scale of the Gaussian curve is different, so that the qualitative shapes of the curves can be compared more easily.

The normal n(x) is determined by weighted least squares. It is defined as the direction of smallest weighted covariance, i.e., it minimizes

(7.5)  $\sum_{i=1}^{N} \bigl( n(x) \cdot (p_i - a(x)) \bigr)^{2}\, \theta(\lVert x - p_i \rVert)$

for fixed x and under the constraint ‖n(x)‖ = 1.

Note that, unlike [Adamson and Alexa 03], we use a(x) as the center of the principal component analysis (PCA), which makes f(x) much better behaved (see Figure 7.21). Also, we do not solve a minimization problem as in [Levin 03, Alexa et al. 03], because we are aiming at an extremely fast method.

Figure 7.21: Visualization of the implicit function f(x) over a 2D point cloud. Points x ∈ ℝ² with f(x) ≈ 0, i.e., points on or close to the surface, are shown in magenta. Red denotes f(x) > 0, and blue denotes f(x) < 0. (a) Point cloud; (b) reconstructed surface using the definition of [Adamson and Alexa 03]; (c) utilizing the centered covariance matrix produces a better surface, but it still has several artifacts; (d) surface and function f(x) based on the SIG. (See Color Plate XVII.) (Courtesy of Elsevier.)

The normal n(x) defined by Equation (7.5) is the eigenvector corresponding to the smallest eigenvalue of the centered covariance matrix B = (b_ij) with

(7.6)  $b_{ij} = \sum_{k=1}^{N} \theta(\lVert x - p_k \rVert)\,\bigl(p_{k,i} - a_i(x)\bigr)\,\bigl(p_{k,j} - a_j(x)\bigr).$

There are several variations of this simple definition, but for the sake of clarity, we will stay with this basic one.
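To make the definition concrete, here is a minimal sketch (Python/NumPy; our own code, not the implementation of [Klein and Zachmann 04b]) that evaluates f(x) for a point cloud with the Gaussian kernel and the centered covariance matrix. For brevity it sums over all N points; a real implementation would restrict the sums to the points inside the (truncated) kernel support. Note also that the sign of the smallest eigenvector, and hence of f, is only determined up to orientation.

```python
import numpy as np

def implicit_f(x, P, h):
    """Evaluate f(x) = n(x) . (x - a(x)) for a point cloud P (N x d array).
    A sketch of Equations (7.2)-(7.6); the orientation of n(x) is arbitrary."""
    d2 = np.sum((P - x) ** 2, axis=1)            # squared distances ||x - p_i||^2
    w = np.exp(-d2 / h**2)                       # Gaussian weights, Equation (7.4)
    a = (w[:, None] * P).sum(axis=0) / w.sum()   # weighted average a(x), Equation (7.3)
    B = (w[:, None] * (P - a)).T @ (P - a)       # centered covariance matrix, Equation (7.6)
    eigvals, eigvecs = np.linalg.eigh(B)         # eigenvalues in ascending order
    n = eigvecs[:, 0]                            # direction of smallest weighted covariance
    return float(n @ (x - a))                    # Equation (7.2)
```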

7.3.2. Euclidean Kernel

This simple definition (and many of its variants) has several problems. One problem is that the Euclidean distance ‖x − p‖, p ∈ P, can be small, while the distance from x to the closest point on S and then along the shortest path on S to p (the geodesic) is quite large.

This can produce artifacts in the surface S (see Figure 7.21). Two typical cases are as follows. First, assume x is halfway between two (possibly unconnected) components of the point cloud. Then it is still influenced by both parts of the point cloud, which have similar weights in Equations (7.3) and (7.5). This can lead to artificial parts of the zero set S where there are no points from P at all. Second, let us assume that x is inside a cavity of the point cloud. Then a(x) gets drawn closer to x than if the point cloud were flat. This makes the zero set biased towards the outside of the cavity, away from the true surface. In the extreme, this can lead to cancellation near the center of a spherical point cloud, where all points on the sphere have a similar weight.

This thwarts algorithms based solely on the point-cloud representation, such as collision detection [Klein and Zachmann 04a] or ray tracing [Adamson and Alexa 04].

The problems just mentioned could be alleviated somewhat by restricting the surface to the region { x : ‖x − a(x)‖ < c } (since a(x) must stay within the convex hull of P). However, this does not help in many cases involving cavities.

7.3.3. Geodesic Distance Approximation

As illustrated in the preceding section, the main problems are caused by a distance function that does not take the topology of S into account. Therefore, we will try to approximate the geodesic distances on the surface S. Unfortunately, we do not have an explicit reconstruction of S, and in many applications, we do not even want to construct one. Instead, we use a geometric proximity graph whose nodes are the points p_i ∈ P.

In principle, we could use any proximity graph, but the sphere-of-influence graph (SIG) seems to yield the best results.

We define a new distance function d_geo(x, p) as follows. Given some location x, we compute its nearest neighbor p* ∈ P. Then we compute the closest point p̄ to x that lies on an edge adjacent to p*; let q denote the other endpoint of that edge. Now, conceptually, the distance from x to any p ∈ P could be defined as

$d_{\mathrm{geo}}(x, p) = d(p^*, p),$

where d(p*, p) for any p ∈ P is the accumulated length of the shortest path from p* to p, multiplied by the number of hops along the path (see Figure 7.22, left image). However, it is not obvious that this is always the desired distance. In addition, it is desirable to define the distance function with as few discontinuities as possible. Therefore, we take the weighted average (see Figure 7.22, right image)

(7.7)  $d_{\mathrm{geo}}(x, p) = (1 - \alpha)\, d(p^*, p) + \alpha\, d(q, p),$

with the interpolation parameter α = ‖p̄ − p*‖ / ‖q − p*‖.

Figure 7.22: A proximity graph can help to improve the quality of the implicit surface by approximation of the geodesic distance instead of the Euclidean distance. (Courtesy of Elsevier.)

Note that we do not add the distance ‖x − p̄‖ to d_geo. That way, f(x) is non-zero even far away from the point cloud.

Of course, there are still discontinuities in d geo , and thus in function f . These can occur at the borders of the Voronoi regions of the cloud points, in particular at borders where the Voronoi sites are far apart from each other, such as points close to the medial axis.

The rationale for multiplying the path length by the number of hops is the following: if an (indirect) neighbor p is reached by a shortest path with many hops, then there are many points in P that should be weighted much more than p, even if the Euclidean distance ‖p* − p‖ is small. This is independent of the concrete proximity graph used for computing the shortest paths.

Overall, when computing f by Equations (7.2) to (7.6), we use d geo in Equation (7.4).
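As an illustration of this distance function, the following sketch (Python/NumPy; the interface is ours and assumes the reconstruction of Equation (7.7) above, with q denoting the second endpoint of the edge on which p̄ lies) computes the hop-weighted shortest-path distances with Dijkstra's algorithm and blends them with the interpolation parameter α:

```python
import heapq
import numpy as np

def hop_weighted_dists(P, adj, src):
    """Dijkstra from node src over a proximity graph on the point cloud P (N x d).
    adj[i] lists the graph neighbors of point i. Returns, for every point, the
    accumulated length of the shortest path from src multiplied by its hop count."""
    n = len(P)
    dist = np.full(n, np.inf)
    hops = np.zeros(n, dtype=int)
    dist[src] = 0.0
    pq = [(0.0, 0, src)]
    while pq:
        d_u, h_u, u = heapq.heappop(pq)
        if d_u > dist[u]:
            continue                                  # stale queue entry
        for v in adj[u]:
            d_v = d_u + np.linalg.norm(P[u] - P[v])   # Euclidean edge length
            if d_v < dist[v]:
                dist[v], hops[v] = d_v, h_u + 1
                heapq.heappush(pq, (d_v, h_u + 1, v))
    out = np.full(n, np.inf)
    reached = np.isfinite(dist)
    out[reached] = dist[reached] * hops[reached]
    return out

def d_geo_all(P, adj, p_star, q, alpha):
    """Approximate geodesic distances from x to every point (Equation (7.7)):
    blend of the hop-weighted graph distances from p* and from q.
    Assumes the proximity graph is connected."""
    return (1.0 - alpha) * hop_weighted_dists(P, adj, p_star) \
           + alpha * hop_weighted_dists(P, adj, q)
```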

7.3.4. Automatic Bandwidth Computation

Another problem of the simple surface definition (without proximity graph) is the bandwidth h in Equation (7.4), which is a critical parameter.

On the one hand, if h is chosen too small, then the variance may be too large, i.e., noise, holes, or other artifacts may appear in the surface. On the other hand, if it is chosen too large, then the bias may be too large, i.e., small features of the surface will be smoothed out. To overcome this problem, [Pauly et al. 02] proposed to scale the parameter h adaptively.

Here, we can use the proximity graph (e.g., a SIG) to estimate the local sampling density, r(x), and then determine h accordingly. Thus, h itself is a function h = h(x).

Let r_1 and r_2 be the lengths of the longest edges incident to p* and q, respectively (see Figure 7.22). Then, we set

(7.8)  $r(x) = (1 - \alpha)\, r_1 + \alpha\, r_2,$
(7.9)  $h(x) = \dfrac{k \cdot r(x)}{\sqrt{\ln(1/\varepsilon)}},$

where ε is a suitably small value (see Section 7.3.1), r is the number of nearest neighbors that determines the radius of each sphere of influence, and k is a scale factor. (Note that ln ε < 0 for realistic values of ε.) Thus, for k = 1, points p_i at distance r(x) from p̄ are assigned a weight of ε (see Equation (7.4)).

We have now replaced the scale- and sampling-dependent parameter h by the scale factor k, which is independent of scale and sampling density. Often, k can simply be set to 1, or it can be used to adjust the amount of smoothing. Note that this automatic bandwidth detection works similarly for many other kernels as well (see Section 7.3.1).

Depending on the application, it might be desirable to involve more and more points in Equation (7.2) as x moves away from the point cloud, so that n(x) becomes the least-squares plane over the complete point set P as x approaches infinity. In that case, we can just add ‖x − p̄‖ to Equation (7.8).
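Under this reading of Equations (7.8) and (7.9) (the scale factor k and the interpolation of r_1 and r_2 are our reconstruction, not necessarily the exact formula of [Klein and Zachmann 04b]), the bandwidth computation is short; the optional ‖x − p̄‖ term implements the extension just mentioned:

```python
import numpy as np

def bandwidth(r1, r2, alpha, eps=1e-3, k=1.0, x=None, p_bar=None):
    """Sketch of the automatic bandwidth: r(x) interpolates the longest incident
    SIG edges r1 and r2 with the parameter alpha (Equation (7.8)); h(x) is scaled
    so that, for k = 1, points at distance r(x) receive weight eps under the
    Gaussian kernel of Equation (7.4). Optionally add ||x - p_bar|| so that the
    bandwidth grows far away from the point cloud."""
    r = (1.0 - alpha) * r1 + alpha * r2
    if x is not None and p_bar is not None:
        r += np.linalg.norm(np.asarray(x) - np.asarray(p_bar))
    return k * r / np.sqrt(np.log(1.0 / eps))
```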

Figure 7.23 shows that the automatic bandwidth determination allows the WLS approach to handle point clouds with varying sampling densities without any manual tuning. The amount of smoothing is very similar across the different sampling densities, relative to the local scale (i.e., density). Notice the fine detail from the tip of the nose down to the chin, and the relatively sparse sampling in the area of the skull and at the bottom.

Figure 7.23: Reconstructed surface based on simple WLS and with proximity graph (rightmost) for a noisy point cloud obtained from the 3D Max Planck model (leftmost). Notice how fine details, as well as sparsely sampled areas, are handled without manual tuning. (See Color Plate XVIII.) (Courtesy of Elsevier.)

7.3.5. Automatic Boundary Detection

Another benefit of the automatic sampling density estimation is a very simple boundary detection method. This method builds on the one proposed by [Adamson and Alexa 04].

The idea is simply to discard points x with f ( x ) = 0, if they are too far away from a ( x ) relative to the sampling density in the vicinity of a ( x ). More precisely, we define a new implicit function

(7.10)  $\tilde{f}(x) = \begin{cases} f(x) & \text{if } \lVert x - a(x) \rVert \le r(a(x)), \\ \text{undefined} & \text{otherwise.} \end{cases}$
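A minimal sketch of this boundary test (the interface, with a(x), f(x), and the sampling-density estimate r(·) passed in as callables, is purely illustrative):

```python
import numpy as np

def f_with_boundary(x, a, f, r):
    """Sketch of Equation (7.10): report the surface value only where x is close
    enough to a(x) relative to the local sampling density r(a(x)); otherwise the
    function is treated as undefined (returned as NaN here)."""
    ax = a(x)
    if np.linalg.norm(np.asarray(x) - ax) <= r(ax):
        return f(x)
    return float("nan")   # beyond the boundary of the sampled surface
```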

Figure 7.24, right image, shows that this simple method is able to handle different sampling densities fairly well.

Figure 7.24: Automatic sampling density estimation for each point allows you to determine the bandwidth automatically and independently of scale and sampling density (middle), and to detect boundaries automatically (right). (See Color Plate XIX.) (Courtesy of Elsevier.)

7.3.6. Complexity of Function Evaluation

The geodesic kernel needs to determine the nearest neighbor p* of a point x in space (see Section 6.4.1). Using a simple kd-tree, an approximate nearest neighbor in 3D can be found in O(log³ N) time [Arya et al. 98]. [4]

[Klein and Zachmann 04b] shows that all points p i influencing x can be determined in constant time by a depth-first or breadth-first search through the graph.
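A sketch of such a traversal (our own interface, not the code of [Klein and Zachmann 04b]): starting at the nearest neighbor p*, it collects all points whose accumulated edge length stays below the support radius of the truncated kernel:

```python
from collections import deque
import numpy as np

def influencing_points(P, adj, p_star, d_max):
    """Breadth-first traversal of the proximity graph that collects all points
    whose accumulated edge length from p_star stays below d_max (e.g., the
    distance at which the truncated kernel weight drops below eps)."""
    dist = {p_star: 0.0}
    queue = deque([p_star])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            d = dist[u] + np.linalg.norm(P[u] - P[v])
            if d < d_max and d < dist.get(v, np.inf):
                dist[v] = d
                queue.append(v)
    return dist   # maps point index -> approximate graph distance from p_star
```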

Overall, f(x) can be evaluated in O(log³ N) time.

To achieve a fast function evaluation in practice, we also need to compute the smallest eigenvector quickly [Klein and Zachmann 04a]. First, we compute the smallest eigenvalue λ₁ by determining the three roots of the cubic characteristic polynomial det(B − λI) of the covariance matrix B [Press et al. 92]. Then, we compute the associated eigenvector using the Cholesky decomposition of B − λ₁I. [5]

In our experience, this method is faster than the Jacobi method by a factor of 4, and faster than singular value decomposition by a factor of 8.
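For illustration, the following sketch computes the smallest eigenvalue of the symmetric 3×3 covariance matrix from the trigonometric solution of its cubic characteristic polynomial. Instead of the Cholesky step with full pivoting described above, it recovers the eigenvector from cross products of the rows of B − λ₁I, a simpler substitute that falls back to a standard eigensolver when the smallest eigenvalue is (nearly) repeated:

```python
import numpy as np

def smallest_eigenpair_3x3(B):
    """Smallest eigenvalue/eigenvector of a symmetric 3x3 matrix B."""
    B = np.asarray(B, dtype=float)
    q = np.trace(B) / 3.0
    p2 = np.sum((B - q * np.eye(3)) ** 2) / 6.0
    if p2 < 1e-30:                      # B is (numerically) a multiple of the identity
        return q, np.array([1.0, 0.0, 0.0])
    p = np.sqrt(p2)
    r = np.clip(np.linalg.det((B - q * np.eye(3)) / p) / 2.0, -1.0, 1.0)
    phi = np.arccos(r) / 3.0
    lam1 = q + 2.0 * p * np.cos(phi + 2.0 * np.pi / 3.0)   # smallest root of det(B - lam*I)
    M = B - lam1 * np.eye(3)
    # The eigenvector spans the null space of M; take the largest cross product of its rows.
    cands = [np.cross(M[0], M[1]), np.cross(M[0], M[2]), np.cross(M[1], M[2])]
    v = max(cands, key=np.linalg.norm)
    nv = np.linalg.norm(v)
    if nv < 1e-12:                      # (nearly) repeated eigenvalue: fall back
        w, V = np.linalg.eigh(B)
        return w[0], V[:, 0]
    return lam1, v / nv
```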

[4] Under mild conditions, nearest-neighbor queries can be answered in O(log N) time by using a Delaunay hierarchy [Devillers 02], but this may not always be practical.

[5] The second step is possible because B − λ₁I is positive semi-definite. Let σ(B) = {λ₁, λ₂, λ₃} with 0 < λ₁ ≤ λ₂ ≤ λ₃. Then, σ(B − λ₁I) = {0, λ₂ − λ₁, λ₃ − λ₁}. Let B − λ₁I = USUᵀ be the singular value decomposition. Then, xᵀ(B − λ₁I)x = xᵀUSUᵀx = yᵀSy ≥ 0. Thus, the Cholesky decomposition can be performed if full pivoting is done [Higham 90].


