In the past few years, point clouds have had a renaissance caused by the widespread availability of 3D scanning technology. Such devices produce a huge number of points, each of which lies on the surface of the real object in the scanner (see Figure 7.19). These sets of points are called point clouds. For one object, there can be tens of millions of points. Usually, they are not ordered (or only partially ordered), and they contain some noise.
In order to render [Pfister et al. 00, Rusinkiewicz and Levoy 00, Zwicker et al. 02, Bala et al. 03] and interact [Klein and Zachmann 04a] with objects represented as point clouds, one must define an appropriate surface (even if it is not explicitly reconstructed).
This definition should produce a surface as close to the original surface as possible while being robust against noise (introduced by the scanning process). At the same time, it should allow the fastest possible rendering and interaction with the object.
In the following, we present a fairly simple definition [Klein and Zachmann 04b].
The surface definition begins with weighted least squares interpolation (WLS).
Let a point cloud 𝒫 with N points pᵢ ∈ ℝᵈ be given. Then an appealing definition of the surface of 𝒫 is the zero set S = { x | f(x) = 0 } of an implicit function [Adamson and Alexa 03]

f(x) = n(x) · (x − a(x)),    (7.2)
where a(x) is the weighted average of all points,

a(x) = ( Σᵢ θ(‖x − pᵢ‖) pᵢ ) / ( Σᵢ θ(‖x − pᵢ‖) ).    (7.3)
Usually, a Gaussian kernel (weight function),

θ(d) = e^(−d²/h²),    (7.4)
is used, but other kernels work as well.
The bandwidth of the kernel, h , allows us to tune the decay of the influence of the points. It should be chosen such that no holes appear [Klein and Zachmann 04a].
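For concreteness, Equations (7.3) and (7.4) translate almost directly into code. The following is a minimal Python sketch (the function names are hypothetical, not from the chapter), using the plain Gaussian kernel with a global bandwidth h:

```python
import math

def gaussian_weight(d, h):
    """Gaussian kernel theta(d) = exp(-d^2 / h^2), Equation (7.4)."""
    return math.exp(-(d * d) / (h * h))

def weighted_average(x, points, h):
    """Weighted average a(x) of all cloud points, Equation (7.3).

    x is a 3D location, points is a sequence of 3D points.
    """
    wsum = 0.0
    acc = [0.0, 0.0, 0.0]
    for p in points:
        w = gaussian_weight(math.dist(x, p), h)
        wsum += w
        for i in range(3):
            acc[i] += w * p[i]
    return [a / wsum for a in acc]
```

Note that a(x) is a convex combination of the pᵢ, so it always lies inside the convex hull of the point cloud.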
Theoretically, θ's support is unbounded. However, it can be safely limited to the extent where it falls below the machine's precision, or some other suitably small threshold. Alternatively, one could use the cubic polynomial [Lee 00]
or the tricube weight function [Cleveland and Loader 95]
or the Wendland function [Wendland 95]
all of which are set to 0 for d > h and, thus, have compact support (see Figure 7.20 for a comparison). However, the choice of kernel function is not critical [Härdle 90].
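The three compactly supported alternatives can be sketched as follows. The exact formulas below are the commonly used forms of these kernels (the original displays were not recoverable here, so they should be checked against the cited references); all three equal 1 at d = 0 and vanish for d ≥ h:

```python
import math

def cubic(d, h):
    """Cubic polynomial kernel in the style of [Lee 00]; 0 for d >= h."""
    if d >= h:
        return 0.0
    t = d / h
    return 2.0 * t ** 3 - 3.0 * t ** 2 + 1.0

def tricube(d, h):
    """Tricube weight function [Cleveland and Loader 95]; 0 for d >= h."""
    if d >= h:
        return 0.0
    t = d / h
    return (1.0 - t ** 3) ** 3

def wendland(d, h):
    """Wendland function [Wendland 95]; 0 for d >= h."""
    if d >= h:
        return 0.0
    t = d / h
    return (1.0 - t) ** 4 * (4.0 * t + 1.0)
```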
The normal n(x) is determined by weighted least squares. It is defined as the direction of smallest weighted covariance, i.e., it minimizes

Σᵢ ( n(x) · (pᵢ − a(x)) )² θ(‖x − pᵢ‖)    (7.5)

for fixed x and under the constraint ‖n(x)‖ = 1.
Note that, unlike [Adamson and Alexa 03], we use a ( x ) as the center of the principal component analysis (PCA), which makes f ( x ) much more well behaved (see Figure 7.21). Also, we do not solve a minimization problem like [Levin 03, Alexa et al. 03], because we are aiming at an extremely fast method.
The normal n(x) defined by Equation (7.5) is the smallest eigenvector of the centered covariance matrix B = (b_jk) with

b_jk = Σᵢ θ(‖x − pᵢ‖) (pᵢ,ⱼ − a(x)ⱼ)(pᵢ,ₖ − a(x)ₖ),    (7.6)

where pᵢ,ⱼ denotes the j-th coordinate of pᵢ.
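As an illustration, the centered covariance matrix of Equation (7.6) can be accumulated as follows (a Python sketch with hypothetical names; a(x) is assumed to have been computed already as the weighted average of the points):

```python
import math

def covariance_matrix(x, points, a, h):
    """Weighted covariance matrix B of Equation (7.6), centered at a = a(x).

    b_jk = sum_i theta(||x - p_i||) * (p_i[j] - a[j]) * (p_i[k] - a[k]),
    with the Gaussian kernel theta of Equation (7.4).
    """
    B = [[0.0] * 3 for _ in range(3)]
    for p in points:
        w = math.exp(-math.dist(x, p) ** 2 / h ** 2)
        q = [p[j] - a[j] for j in range(3)]   # centered point
        for j in range(3):
            for k in range(3):
                B[j][k] += w * q[j] * q[k]
    return B
```

The normal n(x) is then the eigenvector of B belonging to the smallest eigenvalue.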
There are several variations of this simple definition, but for the sake of clarity, we will stay with this basic one.
This simple definition (and many of its variants) has several problems. One problem is that the Euclidean distance ‖x − p‖, p ∈ 𝒫, can be small, while the distance from x to the closest point on S and then along the shortest path on S to p (the geodesic distance) is quite large.
This can produce artifacts in the surface S (see Figure 7.21). Two typical cases are as follows. First, assume x is halfway between two (possibly unconnected) components of the point cloud. Then it is still influenced by both parts of the point cloud, which have similar weights in Equations (7.3) and (7.5). This can lead to an artificial zero subset of S where there are no points from 𝒫 at all. Second, let us assume that x is inside a cavity of the point cloud. Then a(x) gets drawn closer to x than if the point cloud were flat. This makes the zero set biased towards the outside of the cavity, away from the true surface. In the extreme, this can lead to cancellation near the center of a spherical point cloud, where all points on the sphere have a similar weight.
This thwarts algorithms based solely on the point-cloud representation, such as collision detection [Klein and Zachmann 04a] or ray tracing [Adamson and Alexa 04].
The problems just mentioned could be alleviated somewhat by restricting the surface to the region { x : ‖x − a(x)‖ < c } (since a(x) must stay within the convex hull of 𝒫). However, this does not help in many cases involving cavities.
As illustrated in the preceding section, the main problems are caused by a distance function that does not take the topology of S into account. Therefore, we will try to approximate the geodesic distances on the surface S. Unfortunately, we do not have an explicit reconstruction of S, and in many applications, we do not even want to construct one. Instead, we use a geometric proximity graph whose nodes are the points p ∈ 𝒫.
In principle, we could use any proximity graph, but the sphere-of-influence graph (SIG) seems to yield the best results.
We define a new distance function d_geo(x, p) as follows. Given some location x, we compute its nearest neighbor p* ∈ 𝒫. Then we compute the closest point x̄ to x that lies on an edge adjacent to p*; let p** be the other endpoint of that edge. Now, conceptually, the distance from x to any p ∈ 𝒫 could be defined as

d(x, p) = d(p*, p),

where d(p*, p) for any p ∈ 𝒫 is the accumulated length of the shortest path from p* to p, multiplied by the number of hops along the path (see Figure 7.22, left image). However, it is not obvious that this is always the desired distance. In addition, it is desirable to define the distance function with as few discontinuities as possible. Therefore, we take the weighted average (see Figure 7.22, right image)

d_geo(x, p) = α d(p*, p) + (1 − α) d(p**, p),    (7.7)

with the interpolation parameter α = ‖x̄ − p**‖ / ‖p* − p**‖.

Note that we do not add ‖x − x̄‖. That way, f(x) is non-zero even far away from the point cloud.
Of course, there are still discontinuities in d_geo, and thus in the function f. These can occur at the borders of the Voronoi regions of the cloud points, in particular at borders whose Voronoi sites are far apart from each other, such as at points close to the medial axis.
The rationale for multiplying the path length by the number of hops is the following: if an (indirect) neighbor p is reached by a shortest path with many hops, then there are many points in 𝒫 that should be weighted much more than p, even if the Euclidean distance ‖p* − p‖ is small. This is independent of the concrete proximity graph used for computing the shortest paths.
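The hop-multiplied path length d(p*, p) can be computed with a standard Dijkstra traversal that additionally tracks hop counts. A sketch (the adjacency-list representation and names are assumptions, not from the chapter):

```python
import heapq

def hop_weighted_distances(graph, src):
    """For every node reachable from src, return the accumulated length of
    the shortest (length-wise) path from src, multiplied by the number of
    hops on that path.

    graph maps a node to a list of (neighbor, edge_length) pairs.
    """
    dist = {src: 0.0}          # plain accumulated path length
    hops = {src: 0}            # hop count of that path
    heap = [(0.0, 0, src)]
    while heap:
        d, k, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue           # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                hops[v] = k + 1
                heapq.heappush(heap, (nd, k + 1, v))
    return {u: dist[u] * hops[u] for u in dist}
```

On a chain a–b–c with unit edge lengths, b gets distance 1·1 = 1 but c gets 2·2 = 4, so indirect neighbors are strongly demoted, as intended.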
Overall, when computing f by Equations (7.2) to (7.6), we use d geo in Equation (7.4).
Another problem of the simple surface definition (without proximity graph) is the bandwidth h in Equation (7.4), which is a critical parameter.
On the one hand, if h is chosen too small, then the variance may be too large, i. e., noise, holes, or other artifacts may appear in the surface. On the other hand, if it is chosen too large, then the bias may be too large, i. e., small features in the surface will be smoothed out. To overcome this problem, [Pauly et al. 02] proposed to scale the parameter h adaptively.
Here, we can use the proximity graph (e.g., a SIG) to estimate the local sampling density, r(x), and then determine h accordingly. Thus, h itself is a function h = h(x).
Let r₁ and r₂ be the lengths of the longest edges incident to p* and p**, respectively (see Figure 7.22). Then, we set

r(x) = (r₁ + r₂) / (2r),    (7.8)

h(x) = r(x) / √(−log ε),    (7.9)

where ε is a suitably small value (see Section 7.3.1) and r is the number of nearest neighbors that determines the radius of each sphere of influence. (Note that log ε < 0 for realistic values of ε.) Thus, a point pᵢ with distance r(x) from x will be assigned a weight of ε (see Equation (7.4)).
We have now replaced the scale- and sampling-dependent parameter h by another one that is independent of scale and sampling density. Often, this can just be set to 1, or it can be used to adjust the amount of smoothing. Note that this automatic bandwidth detection works similarly for many other kernels as well (see Section 7.3.1).
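Assuming the Gaussian kernel of Equation (7.4), the condition that a point at distance r(x) receives exactly weight ε determines the bandwidth in closed form, since exp(−r(x)²/h²) = ε gives h = r(x)/√(−ln ε). A one-line sketch (hypothetical name):

```python
import math

def bandwidth(r_x, eps):
    """Bandwidth h(x) so that the Gaussian kernel exp(-d^2/h^2) drops
    to eps at distance d = r_x (the local sampling-density estimate)."""
    return r_x / math.sqrt(-math.log(eps))
```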
Depending on the application, it might be desirable to involve more and more points in Equation (7.2), so that n(x) becomes a least-squares plane over the complete point set as x approaches infinity. In that case, we can just add ‖x − p*‖ to Equation (7.8).
Figure 7.23 shows that the automatic bandwidth determination allows the WLS approach to handle point clouds with varying sampling densities without any manual tuning. The amount of smoothing is very similar across the different sampling densities, relative to the scale (i.e., density). Notice the fine detail in the range from the tip of the nose through the chin, and the relatively sparse sampling in the area of the skull and at the bottom.
Another benefit of the automatic sampling density estimation is a very simple boundary detection method. This method builds on the one proposed by [Adamson and Alexa 04].
The idea is simply to discard points x with f(x) = 0 if they are too far away from a(x), relative to the sampling density in the vicinity of a(x). More precisely, we define a new implicit function

f̃(x) = f(x) if ‖x − a(x)‖ ≤ c · r(a(x)), and f̃(x) = 1 otherwise,    (7.10)

for a suitable constant c.
Figure 7.24, right image, shows that this simple method is able to handle different sampling densities fairly well.
The geodesic kernel needs to determine the nearest neighbor p* of a point x in space (see Section 6.4.1). Using a simple kd-tree, an approximate nearest neighbor in 3D can be found in time O(log³ N) [Arya et al. 98]. [4]
[Klein and Zachmann 04b] shows that all points pᵢ influencing x can be determined in constant time by a depth-first or breadth-first search through the graph.
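Such a search starts at the nearest neighbor p* and expands only through points that still lie within the (truncated) kernel support of x. A minimal sketch of the breadth-first variant (the adjacency-list representation, `pos` map, and `support` radius are assumptions for illustration):

```python
import math
from collections import deque

def influencing_points(graph, start, pos, x, support):
    """Collect all cloud points within distance `support` of x by a
    pruned breadth-first search through the proximity graph, starting
    at the nearest neighbor `start`.

    graph: node -> list of neighbor nodes; pos: node -> 3D coordinates.
    """
    seen = {start}
    out = []
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if math.dist(pos[u], x) <= support:
            out.append(u)
            # only expand from nodes that are themselves inside the support
            for v in graph[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    return out
```

Since the kernel support is bounded (see the discussion of truncation above) and the sampling density is locally bounded, the number of visited nodes does not grow with N, which is where the constant-time claim comes from.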
Overall, f(x) can be determined in O(log³ N) time.
To achieve a fast practical function evaluation, we also need to compute the smallest eigenvector quickly [Klein and Zachmann 04a]. First, we compute the smallest eigenvalue λ₁ by determining the three roots of the cubic characteristic polynomial det(B − λI) of the covariance matrix B [Press et al. 92]. Then, we compute the associated eigenvector using the Cholesky decomposition of B − λ₁I. [5]
In our experience, this method is faster than the Jacobi method by a factor of 4, and faster than singular value decomposition by a factor of 8.
[4] Under mild conditions, nearest-neighbor queries can be answered in O(log N) time by using a Delaunay hierarchy [Devillers 02], but this may not always be practical.
[5] The second step is possible because B − λ₁I is positive semi-definite. Let σ(B) = {λ₁, λ₂, λ₃} with 0 ≤ λ₁ ≤ λ₂ ≤ λ₃. Then, σ(B − λ₁I) = {0, λ₂ − λ₁, λ₃ − λ₁}. Let B − λ₁I = USUᵀ be the singular value decomposition. Then, xᵀ(B − λ₁I)x = xᵀUSUᵀx = yᵀSy ≥ 0. Thus, the Cholesky decomposition can be performed if full pivoting is done [Higham 90].