
Constrained 3D Navigation with 2D Controllers
Andrew J. Hanson and Eric A. Wernert
Computer Science Department
Indiana University
Bloomington, IN 47405 USA
Abstract
Navigation through 3D spaces is required in many interactive graphics and virtual reality applications. We consider the subclass of situations in which a 2D device such as a mouse controls smooth movements among viewpoints for a “through the screen” display of a 3D world. Frequently, there is a poor match between the goal of such a navigation activity, the control device, and the skills of the average user. We propose a unified mathematical framework for incorporating context-dependent constraints into the generalized viewpoint generation problem. These designer-supplied constraint modes provide a middle ground between the triviality of a single camera animation path and the confusing excess freedom of common unconstrained control paradigms. We illustrate the approach with a variety of examples, including terrain models, interior architectural spaces, and complex molecules.
CR Categories: I.3.6 [Computer Graphics]: Methodology and Techniques—Interaction Techniques. I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. I.3.8 [Computer Graphics]: Applications.
Keywords: Navigation; Constrained Navigation; Viewing Control; Camera Control
1 Introduction
Navigation in 3D scenes, which we define as the process of selecting a continuously-changing set of viewing parameters, is a long-standing challenge for computer graphics and visualization applications. Computer animation, for example, requires the choice of a time sequence of camera models that can be considered as a one-parameter constraint; applicable techniques range from direct orientation interpolation (e.g., [18, 11]) to rule-based systems [9, 10]. The more complex task of interactive navigation has been considered in a wide variety of contexts, ranging from the viewing of simple 3D scenes on a desktop monitor to the control of fully immersive virtual reality environments. Examples of such viewing control methods run the gamut from orientation control paradigms (Brooks [4], Nielson and Olson [14], Chen et al. [5], Hanson [7], and Shoemake [20, 21]) to methods that intelligently focus on particular scene points such as Mackinlay et al. [12], constraint-based camera placement systems such as Phillips et al. [15], and general control systems such as those discussed by Ware and Osborne [25] and Drucker et al. [6]. Constraints in view selection specifically for virtual reality have been used, for example, by Robinett and Holloway [16] to go beyond the usual “flying” modality, and by Billinghurst and Savage [2] in an expert system context.
In this paper, we focus on the problem of using a 2 degree-of-freedom controller such as a mouse to move effectively through a displayed 3D environment with a particular task in mind; we assume that the system designer has at least some idea of how in fact to direct a naive user’s attention to those aspects of the scene needed to meet a chosen goal. We present some very specific families of techniques that may be used by the designer to constrain the user’s motion in ways that avoid the “lost-in-space” pitfalls of most airplane-style or helicopter-style controls with up to 6 (or more) degrees of freedom. Our fundamental notion is that, rather than controlling an unconstrained vehicle in 3D space, the 2D control device is actually moving the user on a constrained subspace, the “guide manifold,” a kind of virtual 2D sidewalk. At every sample point of this virtual sidewalk, we may specify a “guide field” containing all the information the designer wishes to supply to a customizable algorithm computing the viewing parameters for the user. Typically, both the guide manifold and the guide fields are specified only at sample points, and interpolation methods are used to determine intermediate values. The manifold itself may be continuous, may consist of disjoint pieces, or may even cross over itself to give it “Riemann-manifold” properties that let the traveler traverse a circuit over and over to the same spot, and each time be presented with a new set of guide parameters. The parameters of the guide field may supply arbitrarily complex information to the designer’s algorithm; we illustrate the power of the idea using applications to terrain navigation, architectural structures, and complex molecules. An evaluation of several basic features of the paradigm is currently in progress.
Combining Displacement Constraints and Viewing Constraints. There are several effective ways to construct a framework for constraint-based navigation in 3D viewing situations. In the simplest version, we just extend the one-parameter camera path of a traditional animation to a two-parameter surface in 3D space navigated by mouse strokes; each point of the surface incorporates a fixed camera-model field. In many cases, the data themselves provide a context of interest, and can thus be used to modulate a fixed viewing-parameter field relative to the source of interest.

The field variables may be fixed a priori at key vertices using designer-specified camera models (orientation plus focal length) and interpolated among key vertices; or the field variables may be computed from procedures combining fixed fields, dynamic or static scene data, and current viewer position and state (e.g., velocity). It then becomes the designer’s problem, not the viewer’s, to minimize the “lost in space” effect, and thus to optimize the viewer’s ability to focus on the task that is the goal of the navigation.
A related example of such a system was introduced for the exploration of complex mathematical manifolds in Hanson and Ma [8]. The key constraint in this original concept was the idea that every direction on a 2D manifold implies a geodesic path determined by the intrinsic geometry; the manifold itself provides a constraint on the navigation by providing a “platform” on which the user walks and which continually rolls up to meet the viewer’s feet, keeping a constant relative orientation between the viewer’s vertical and the surface normal. This path automatically determines an orientation in response to directional changes of the 2D mouse control. The more general concepts proposed in the current paper follow from the realization that the manifold on which the viewer is “walking” could in fact be an invisible sidewalk created for the purpose of seeing other things in the surrounding world, and that the geodesic-constrained orientations can easily be replaced by a completely arbitrary field of quaternion orientations combined with a tandem field of focal lengths and additional viewing and control parameters if appropriate.
Figure 1: Diagram of the general mathematical concept of a guide field and its ramifications.
Below, we propose several additional families of dynamic procedures for determining the current camera parameters in addition to fixed key vertex values and the geodesic interpolation methods of Hanson and Ma [8]; these range from methods based on metric relations between the navigation surface and the nearby scene or terrain, to methods that could be based on arbitrary rules in the manner of Karp and Feiner, or Billinghurst and Savage [9, 10, 2]. While we focus here on 2D mouse-based interfaces, the framework clearly extends to immersive virtual reality environments, where the virtual space of the control device can select points and orientations in a 3D volume, instead of simple 2D mouse coordinates. We defer exploration of such issues for the time being in order to focus here on fundamental concepts of direct application to the most common visualization systems.
2 Fundamental Methods
The basic idea behind our approach is the concept of mapping a controller domain into a guide field range consisting of the parameters needed to construct the scene image, possibly combined with parameters modifying the influence of the controller. This is represented schematically in Figure 1. We begin with a bare controller position (u, v) in the controller domain C, assuming the implicit availability of heading and velocity information, and define a map G from the domain of the control device to the full space of parameters. In principle the range of the parameter space can include anything, even computed quantities. Thus we write

\[
\mathcal{G} : (u, v) \in C \;\longmapsto\; \{\, \mathbf{p}(u,v),\ \mathbf{q}(u,v),\ f(u,v),\ \ldots \,\} \tag{1}
\]

where objects in the range include such things as
1. Camera position on guide manifold: the point in the universe where the virtual owner of the device appears to be standing.
2. Camera orientation: where the virtual user is looking.
3. Camera properties: parameters such as focal length (wide angle, telephoto lens), depth of field, and binocular convergence.
4. Viewing properties: fog, light attenuation, etc.
5. Control modifiers: mouse response, importance weighting, etc.
6. Visualization application parameters: streamline characteristics, particle source location, pseudo-color assignments, etc.

By retaining successive values of these fields in the control program, the designer can also create rate-of-change-dependent responses.
For most practical purposes, the controller domain corresponds locally to a path in the guide manifold that is equivalent to a surface in the 3D world. However, one can imagine applications in which more general mappings might be useful. For example, one might instead use the mouse position to vary a two-parameter camera orientation, treat this orientation as the independent variable of the guide manifold, and treat spatial position as a dependent guide field variable attached to each point of the guide manifold. Therefore, we retain all the scene-viewing parameters in a single data structure, and specify local 2D patches with coordinate vertices in that parameter space that correspond to the 2D controller position. Each value of the independent controller variables then selects a particular set of dependent variables (e.g., one camera position and an orientation out of the space of possible viewing angles at that position). These dependent variables are typically determined by selecting samples on a lattice in the control space, and thus we must interpolate all these variables in tandem. Achieving smoothness in all variables is problematic, but can be addressed in various ways discussed below.
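To make this concrete, the following minimal Python sketch (our own illustration, not code from the paper’s Open Inventor implementation; the names GuideSample and GuideField are hypothetical) stores all dependent variables in tandem at each lattice point and performs a bilinear lookup of the position field:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GuideSample:
    """One guide-field sample: all dependent variables stored in tandem."""
    position: np.ndarray    # 3D camera position on the guide manifold
    quaternion: np.ndarray  # unit quaternion (w, x, y, z) for orientation
    focal_length: float     # camera-property field
    response: float         # control-modifier field (mouse sensitivity)

class GuideField:
    """Maps 2D controller coordinates (u, v) in [0,1]^2 to viewing parameters."""
    def __init__(self, lattice):
        self.lattice = lattice  # 2D nested list of GuideSample
        self.rows, self.cols = len(lattice), len(lattice[0])

    def position_at(self, u, v):
        """Bilinear interpolation of the position field alone; orientation
        requires quaternion interpolation (see Section 3.2)."""
        x, y = u * (self.cols - 1), v * (self.rows - 1)
        j, i = int(x), int(y)
        j1, i1 = min(j + 1, self.cols - 1), min(i + 1, self.rows - 1)
        t, s = x - j, y - i
        p00, p01 = self.lattice[i][j].position, self.lattice[i][j1].position
        p10, p11 = self.lattice[i1][j].position, self.lattice[i1][j1].position
        return (1 - s) * ((1 - t) * p00 + t * p01) + s * ((1 - t) * p10 + t * p11)
```

Bilinear lookup is shown only for brevity; the spline and quaternion methods needed for perceptually smooth fields are discussed in Section 3.2.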
Winged Patches. The simplest relation mapping the controller space to the scene viewing parameters is generated by a single rectangular patch in one-to-one correspondence with the 2D mouse position, as shown in Figure 2a. To create navigable manifolds in more complex situations, we must sew together many of these fundamental pieces to form a connected whole. The simplest practical way to achieve this is to require that the edge shared by two adjacent patches be “winged:” that is, the curve representing the edge must contain pointers to the rectangular patches that share it, allowing a navigation algorithm to detect the end of one patch and implement a transition to the next patch. Figure 2b illustrates a typical structure that can be represented in this way; many interesting topological objects one might wish to represent, such as a sphere, require two or more such patches (for further details, consult any elementary text on differentiable topological manifolds). There are many ways one might handle winged patches in practice, and such issues as continuity and differentiability across the transition edges are open to the designer; in some cases a smooth transition, achievable using spline techniques, may be essential, and in other cases a transition with a discontinuous derivative may create the desired effect.
Figure 2: (a) A rectangular patch in mouse space (below), lifted to a guide surface in 3D (above). (b) A network of rectangular guide patches pieced together into a generalized guide surface using winged edges to relate one patch to another.
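To show how a navigation algorithm might consume these winged edges, here is a hedged Python sketch (the Patch and step names are hypothetical; the coordinate remapping convention at each edge is a designer choice) that hands controller coordinates off to the adjacent patch whenever they leave the unit square:

```python
class Patch:
    """A rectangular guide patch whose edges carry "wings": pointers to the
    adjacent patch plus a function remapping coordinates across the edge."""
    def __init__(self, field):
        self.field = field       # the guide field attached to this patch
        self.neighbors = {}      # edge name -> (Patch, remap function)

def step(patch, u, v, du, dv):
    """Advance the controller position; transition across a winged edge if
    the motion leaves this patch's [0,1]^2 parameter square."""
    u, v = u + du, v + dv
    for edge, inside in (("left", u >= 0.0), ("right", u <= 1.0),
                         ("bottom", v >= 0.0), ("top", v <= 1.0)):
        if not inside and edge in patch.neighbors:
            patch, remap = patch.neighbors[edge]
            u, v = remap(u, v)   # e.g. crossing "right" might map (u, v) -> (u - 1, v)
            break
    # if no neighbor shares the crossed edge, simply clamp to the boundary
    return patch, min(max(u, 0.0), 1.0), min(max(v, 0.0), 1.0)
```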
Modulation by Data. We can immediately go beyond the already useful idea of having predetermined camera parameters at each point of the navigable space by defining modifiers of the default parameters. In Figure 7, we show the result of using the gradient of the terrain elevation model as a cue: starting with an “up” direction aligned with the surface normal, we rotate the camera by a weighted amount to turn gently towards the gradient into the valley.
An explicit example is the following: at each point of the coordinate-space guide manifold, determine the “heads up” direction u of the camera frame and the “look at” direction g of the camera frame, and take the projection t of the terrain gradient onto the plane perpendicular to u; then, if θ describes the angle between the projected terrain gradient t and the camera gaze direction g, one rotates the camera about the vector u by kθ, where k (with 0 ≤ k ≤ k_max) is the relative magnitude of the projected gradient strength.
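A minimal numerical sketch of this rotation, assuming unit vectors and using Rodrigues’ rotation formula (our own illustration; the weight cap k_max is an arbitrary tuning constant, not a value from the paper):

```python
import numpy as np

def rodrigues(vec, axis, angle):
    """Rotate vec about the unit vector axis by the given angle."""
    return (vec * np.cos(angle)
            + np.cross(axis, vec) * np.sin(angle)
            + axis * np.dot(axis, vec) * (1.0 - np.cos(angle)))

def modulate_gaze(up, gaze, terrain_grad, k_max=0.5):
    """Turn the gaze gently toward the terrain gradient by rotating about
    the "up" vector, weighted by the projected gradient strength."""
    t = terrain_grad - np.dot(terrain_grad, up) * up   # project out "up"
    mag = np.linalg.norm(t)
    if mag < 1e-9:
        return gaze                     # flat terrain: no preferred direction
    t /= mag
    # signed angle from gaze to the projected gradient, measured about "up"
    theta = np.arctan2(np.dot(np.cross(gaze, t), up), np.dot(gaze, t))
    k = min(mag, 1.0) * k_max           # weight by relative gradient magnitude
    return rodrigues(gaze, up, k * theta)
```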
Interest Vectors. Interest vectors are a generalization of the data modulation method of the previous paragraph. When the viewer is positioned at any point in a particular scene, the designer may record both viewer information, such as the gaze direction g, and a direction of interest i in the scene appropriate to the current viewer state. These typically provide sufficient information to specify a context-based, weightable state change for the camera model. A typical example would compute the plane containing g and i and rotate about the direction normal to that plane, n = g × i, by an angle that is either small, for passing interest, or sufficient to place g exactly in line with i, for very high interest. In other cases, the “up” direction of the camera frame may be fixed or constrained, making a rotation about n forbidden; in such circumstances, we project i onto the plane perpendicular to the “up” direction and use the projected vector as the interest direction instead, as in the data modulation example.
Interest vectors can easily be designed using “interest fields” related to the level sets used for implicit surfaces (e.g., by Blinn [3]). By defining a 3D scalar function that is large near a selected family of scene points, the designer can use the gradient to specify where the user’s attention should be directed whenever the user draws near; the corresponding level-set implicit surfaces define manifolds of equal “attention importance” in the navigation space, and could be displayed optionally as navigation cues. Note that a separate interest field can in principle be supplied for each parameter, allowing, e.g., the camera focal length to be varied independently in complex ways throughout the navigation.
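One plausible realization of such an interest field uses Gaussian “blobs” centered on the selected scene points; the Gaussian form and every name below are illustrative choices, not prescriptions from the paper:

```python
import numpy as np

def interest_field(p, sites, strengths, radius=1.0):
    """Blinn-style scalar "attention" field: large near interesting points.
    sites is an (N, 3) array of scene points, strengths an (N,) array."""
    r2 = np.sum((sites - p) ** 2, axis=1)
    return float(np.sum(strengths * np.exp(-r2 / radius ** 2)))

def interest_vector(p, sites, strengths, radius=1.0, eps=1e-4):
    """Central-difference gradient of the field: the direction in which the
    user's attention should be drawn when near the selected points."""
    g = np.zeros(3)
    for i in range(3):
        dp = np.zeros(3); dp[i] = eps
        g[i] = (interest_field(p + dp, sites, strengths, radius)
                - interest_field(p - dp, sites, strengths, radius)) / (2 * eps)
    return g
```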
Sensitivity Fields. A number of applications have identifiable areas where one wants to have very fine control, and others where one wants coarse control for quickly traversing large, uninteresting areas. We note two examples that fit cleanly into our framework: (1) Velocity-based displacement. Several common mouse interfaces have long supported this feature: the velocity of the mouse is measured, and as the speed increases, the overall displacement is amplified accordingly, allowing quick navigation to all corners of the screen. (2) Response field. Here, we just define a scalar field over the guide manifold and use it to magnify or reduce the bare controller displacement at each local point. Effects such as those of Mackinlay [12] could be achieved without the use of scale factors simply by refining the mesh near the critical points of the guide manifold. However, it is awkward to make the changes occur smoothly in such a mesh, and the continuous scale-change field overcomes this. Figure 3 illustrates a field that causes very small responses in the foreground depression where the scale is 0.1, and very large responses at the background peak, where the scale approaches 3.
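A sketch of the response-field modifier (nearest-sample lookup shown for brevity; a production system would interpolate the field in tandem with the others, as in Section 3.2):

```python
import numpy as np

def apply_response(du, dv, response, u, v):
    """Scale the raw controller displacement (du, dv) by the local value of
    a scalar response field sampled over the guide manifold: values below
    one give fine-grained control, values above one speed traversal."""
    rows, cols = response.shape
    i = int(round(v * (rows - 1)))
    j = int(round(u * (cols - 1)))
    s = response[i, j]
    return du * s, dv * s
```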
3 Designing Constrained Navigation Applications
3.1 Basic Components
Our constrained navigation paradigm in its basic form requires an interactively renderable 3D scene plus the following:
Constraint Surface. A surface data structure every point of which can be reached in a predictable manner by incremental motions of a 2D mouse. In practice, one would therefore almost always use as building blocks rectangular arrays of 3D points corresponding to projections onto the 2D rectangular mouse coordinates. These can be joined as in Figure 2b to form a patchwork of polygons that can be traversed incrementally. More complex surfaces (e.g., multiple coverings, multi-branched soap bubbles) may be used in a similar fashion for particular applications. The most intuitive constraint surface is a sidewalk-like mesh of 3D points, but nothing prevents us from using, e.g., the latitude and longitude of camera orientation.

Creating a constraint surface for a given problem can be facilitated in some cases by studying the features of the problem. For example, the toroidal navigation surface chosen in Figure 10 is essentially a level set of the electron density. Complex topological objects and terrain models can provide their own initial navigation surfaces by creating parallel surfaces a fixed distance away, or projected outward from the surface normals. Many problems thus contain strong hints to guide the design of an appropriate family of constraint surfaces.
Camera Model Field. At each point of the constraint surface, the designer must attach those values of the camera model field complementary to the constraint surface (orientation if the constraint surface is spatial, position if the constraint surface is orientation, etc.). Thus at each point of the constraint surface array we typically construct a data structure consisting of the variables (p, q, f), which describe the 3D position p, the orientation in terms of a quaternion frame q, and the focal length f (or perhaps the camera frustum). In practice, these fields would normally be specified at key vertices and interpolated to the intermediate points of the constraint surface.
3.2 Interpolation
Given the normal situation where only a finite number of sample points appear in the array of camera model fields, we require the fields (p, q, f) to be interpolated at intermediate points. This is typically accomplished for rectangular sample spaces by taking local rectangular grids of anchor points and performing a bicubic Catmull-Rom spline interpolation, thus ensuring that all grid field values are actually on the interpolated surface. Quaternions must be used to achieve smooth orientation interpolations as noted by Shoemake [18, 19], and refined in subsequent work such as that of Schlag [17], Nielson [13], and Kim et al. [11]; 2D rectangular extensions of these methods are straightforward. Other variables such as the focal length and controller response field can be interpolated similarly in tandem.
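For reference, the two elementary building blocks look as follows (a one-dimensional Catmull-Rom segment and Shoemake-style slerp; the full 2D quaternion spline uses Schlag’s construction [17], for which slerp is the primitive operation):

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom segment between p1 and p2 for t in [0,1]; the curve
    passes through every anchor value, as required for the grid fields."""
    return 0.5 * (2 * p1 + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def slerp(q0, q1, t):
    """Spherical linear interpolation of unit quaternions (Shoemake [18])."""
    d = np.dot(q0, q1)
    if d < 0.0:                  # take the shorter arc on the 4D sphere
        q1, d = -q1, -d
    if d > 0.9995:               # nearly parallel: normalized lerp is stable
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(d)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)
```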
However, experiments with our applications made it clear that one cannot in general produce interpolations based on arbitrary anchor values that produce equivalent perceptions of smoothness in both camera position and orientation (or focal length, or whatever). If the knot points are equally spaced in spatial position, the orientation changes may not be uniformly spaced, and vice versa. Among the solutions to this problem currently being investigated are: the adoption of a combined metric in the full parameter space to define a hybrid variety of uniformly spaced knot points, the use of a dynamical model resembling a moving gyroscope that is solved to determine the camera motion, and a similar generalization of the method of Barr et al. [1] to include spatial parameters as well.
3.3 Methods for Determining the Camera Model Field
We next present a selection of approaches that can be used to determine the camera model structure at any particular point of a navigation path.
Constant key vertices. The simplest configuration utilizes a designer-supplied grid of constant camera parameters, along with a procedure for interpolation among the grid points. The predefined key vertex method is well-adapted to many classic applications, and can easily be understood (and even defined) as a family of deformations of a single fixed camera-animation path.
Figure 3: A scaling field that could be used, in regions of value greater than unity, to magnify the screen distance traversed by a unit mouse motion; similarly, in regions of value less than unity, this field would slow the mouse response to provide fine-grained control in those limited areas where it is required.
Space-walk frames and constrained “up” fields. The basic manifold traversal method of Hanson and Ma [8] can be used with 2D constraint manifolds of arbitrary complexity, and is extensible to 3D as well. Effective use of the method requires data stored in a winged-edge format rather than the simpler 2D parametric rectangular grid format that we have implicitly assumed for most of the discussion. The intrinsically defined transitions from polygon to polygon allow one to navigate a complex surface keeping the world “up” direction aligned with the surface normal throughout the traversal. While it is natural to have the gaze direction pointed in the direction of motion, this is not required; fixed camera parameters can be prestored at each vertex and modulated either by scene features or the default space-walk camera frame.
Another interesting variant is to specify only the “up” direction of the camera frame at each point (manually or from the normal to the constraint manifold); then the camera has a single rotational degree of freedom at each point that can be determined from, e.g., the viewer velocity or other data.
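A minimal sketch of this frame construction, assuming the remaining rotational freedom is resolved by taking the gaze along the direction of motion (the full method of [8] parallel-transports the frame along surface geodesics; only the up-equals-normal constraint is shown here):

```python
import numpy as np

def spacewalk_frame(normal, motion):
    """Build an orthonormal camera frame whose "up" is the surface normal
    and whose gaze is the motion direction projected into the tangent plane."""
    up = normal / np.linalg.norm(normal)
    gaze = motion - np.dot(motion, up) * up   # keep the gaze tangent
    gaze /= np.linalg.norm(gaze)              # assumes motion not parallel to normal
    right = np.cross(gaze, up)
    return right, up, gaze
```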
3.4 Designer Techniques
There are a variety of techniques that we have found useful in practice to enhance the utility, visual immediacy, and flexibility of the constrained navigation framework. Among these we note especially the following:
Fog, Spotlights, etc. The actual scene appearance can equally well be modulated to suit the designer’s needs. We suggest the following methods: (1) Fog. As one passes through a scene, one can limit the visibility to a handful of key regions by obscuring the most distant objects. Other application-dependent depth cues can be used if appropriate. (2) Spotlights. Whether or not the camera model allows you to change its gaze, you can shine a spotlight on any desired sector to emphasize it. This is very easy in OpenGL, requiring only the definition of a few key-frame values of a direction. The spotlight need not be large, nor coincide with the gaze or motion directions. See Figure 8 for an example.
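For concreteness, a fixed-function OpenGL spotlight of this kind can be aimed as below (a sketch using PyOpenGL, assuming an active GL context; aim_spotlight is a hypothetical helper, and the cutoff and exponent values are arbitrary):

```python
from OpenGL.GL import (GL_LIGHT1, GL_LIGHTING, GL_POSITION, GL_SPOT_CUTOFF,
                       GL_SPOT_DIRECTION, GL_SPOT_EXPONENT, glEnable,
                       glLightf, glLightfv)

def aim_spotlight(position, target):
    """Shine a narrow spotlight from position toward target; the direction
    can be key-framed and interpolated like any other guide-field variable."""
    direction = [t - p for p, t in zip(position, target)]
    glEnable(GL_LIGHTING)
    glEnable(GL_LIGHT1)
    glLightfv(GL_LIGHT1, GL_POSITION, (*position, 1.0))  # w=1: positional light
    glLightfv(GL_LIGHT1, GL_SPOT_DIRECTION, direction)
    glLightf(GL_LIGHT1, GL_SPOT_CUTOFF, 15.0)            # narrow cone, in degrees
    glLightf(GL_LIGHT1, GL_SPOT_EXPONENT, 32.0)          # concentrated falloff
```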
Figure 4: An example of a navigation manifold that contains more than one possible layer, hence more than one possible camera model, depending on one’s route to the scene.
Vista Points. A fundamental context-defining technique available in such a navigation system is the “scenic overlook.” This is very much like an overlook on a vacation highway, except that the signposts and annotated vista points can be placed anywhere in 3D space continuously connected to the sidewalk. As the viewer approaches the critical vista point itself, changes in the focal length, camera orientation, and control response can be imposed by the designer to exactly emulate features such as Mackinlay et al.’s [12] controlled approach, or even “dynamic field glasses” that focus in on distant scene features as though one had donned zoomable binoculars to pan across the scene of interest, similar to one scenario of Robinett and Holloway [16]. An example is given in Figure 9.
Multiple Coverings. Another fundamental technique is the “multiple covering” navigation surface. (Readers with mathematical backgrounds will recognize this as a relative of Riemann surfaces in complex variable theory.) Here, one creates a surface that may come back to the same point by many different routes; a simple example is a double ribbon, as shown in Figure 4, which allows the camera to point in one family of directions the first time around the ribbon, in other directions the second time, and to return to the original state the third time around. An explicit application is depicted in Figure 11. The reader can imagine arbitrarily complex variants, including instantaneous state transitions between entirely different guide fields.
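In implementation terms, a multiple covering reduces to a layer index that advances whenever the traveler crosses the surface’s seam; a minimal sketch (hypothetical names, one camera-model field per covering layer):

```python
class MultipleCovering:
    """A guide surface traversed through several layers before returning to
    its start; the layer index selects which field applies at a given (u, v)."""
    def __init__(self, fields):
        self.fields = fields    # one guide field per covering layer
        self.layer = 0

    def cross_seam(self):
        """Crossing the ribbon's seam advances to the next covering layer;
        after len(fields) circuits, the original state returns."""
        self.layer = (self.layer + 1) % len(self.fields)

    def current_field(self):
        return self.fields[self.layer]
```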
3.5 Dynamic Mapping Techniques
Several prospects for more complex control strategies appear promising for future work.
Lead time. Sometimes we want to have the system react to where we will be, not where we are. This leads one to implement virtual navigation avatars (we might call them “navatars”) sailing in front of the viewer, and requires some predictive computation. Once the hypothesized avatar position is determined by an appropriate algorithm, the designer can present varying options tying the motion more or less closely to the avatar, or perhaps allowing diversions in the avatar’s path.
Viewer state procedures and rules. The user state in a navigation problem contains a number of variables that can be tracked and computed, particularly those involving velocity and heading history (e.g., some of the techniques reviewed in Chen et al. [5]). Arcade games often exploit such information, particularly to add challenge to a control strategy by preventing direct manipulation of the object to be controlled. In physical simulations, momentum, friction, and air resistance play a crucial role in making driving and flight simulators realistic. Such factors can be incorporated into the procedures or rules determining the evolution of the camera field on the constraint surface to accomplish a number of intuitive physical effects.
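As one example, momentum and friction can be layered onto the bare controller input with a simple point-mass model (an illustrative sketch; the constants are arbitrary tuning values):

```python
import numpy as np

def integrate_motion(vel, control, dt, mass=1.0, friction=3.0):
    """Treat the controller displacement as a force on a point mass with
    viscous friction, so the camera coasts and decelerates like a vehicle
    rather than responding instantaneously to the mouse."""
    accel = control / mass - friction * vel
    vel = vel + accel * dt
    return vel, vel * dt        # new velocity, smoothed displacement
```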
Context-based rules. A variety of approaches have been proposed in the literature to use context-based knowledge, expert system domain rules, and artificial intelligence planning methods to determine transitions among camera positions in animation or even complete animation paths (e.g., [9, 10, 2]). It is clearly appropriate to apply such techniques to the more general philosophy of constrained navigation proposed here; this is a fertile area for future research.
4 Examples
In this section, we present a series of examples realized by implementations using the Open Inventor class libraries in the IRIS Explorer and Open Inventor environments; we note in particular that many of the needed quaternion-based classes and methods are already supplied. We implemented our own Catmull-Rom interpolator based on the Schlag algorithm [17].
Wandering Camera Path with Wandering View. In a traditional computer animation, the camera itself may follow many different constraints such as looking at a single point on the ground throughout the motion, tracking a moving object in the scene, or staring in a fixed direction. Figure 5(a,b) shows a generalization of the latter with the viewer’s trajectory confined to a plane. In Figure 6a, the path is still constrained to the plane, but designer-placed camera orientations are used as key vertices for a quaternion spline interpolation; Figure 6b shows the scene viewed from the same point as Figure 5b, but with the modified camera field.

Terrain Navigation: Conservative Flight Path. In Figure 7, we show a more realistic guide manifold for navigating a terrain model; we employ a contoured 3D constraint surface and constrain the camera “up” vector to be the surface normal. The camera orientation at each point is determined by rotating relative to the constant gaze direction to look slightly in the direction of the terrain gradient below. We note that we need not require a global “up” direction; if desired, we can transition smoothly from “right-side-up” in the world to “upside-down” (see below).
Spotlight Attention Focus. An example of the spotlight technique, which can be used to focus the user’s attention on a point that is not necessarily aligned with the direction of the camera gaze or the direction of motion, is shown in Figure 8.
Terrain Navigation: Vista Point Ahead! A tour designer in the paradigm presented here has not only the ability to keep wandering users in a limited set of viewpoints and to keep their attention focused only on what they are supposed to see, but also to prepare special treats. In particular, the constraint surface itself may vary dramatically, and the focal length can be controlled and interpolated throughout the grid just like the other variables. In the scenario presented in Figure 9, the designer has placed two “vista points” in the scene which the user may approach at will while roaming the constraint space. Figure 9a focuses on one particular point that causes the user to rise rapidly above the world to a very high vantage point, while the camera is forced to look down below at the retreating scene data, creating the view of Figure 9b. Figure 9c is rather like a highway rest stop, where approaching a particular point on the constraint surface swings your gaze direction around, points at a landmark you might never have noticed otherwise, and puts a “telephoto lens” on the camera so that the view automatically zooms in on the point in question.
Figure 5: Camera path constrained to a plane with fixed camera orientation. (a) View of path and camera model control points on the constraint surface. (b) View using the camera model field at a selected point.
Figure 6: Camera path constrained to a plane with camera orientation modulated by the terrain gradient. (a) View of path and camera model control points on the constraint surface. (b) View using the camera model field at a selected point.
Molecule Navigation. The most challenging applications for constrained navigation involve the perusal of objects with no natural orientation. Here we have both the advantage of being permitted great flexibility, and the drawback of having to decide on a particular guiding strategy. Figure 10a shows how we have chosen a toroidal navigation manifold that entirely envelops a helical molecule. This constraint surface allows us to move quickly to every conceivable viewpoint on the molecule with a series of very simple mouse strokes. To keep the user in context, we make the “up” direction inside the molecule the same direction as outside, while tilting a bit at the top and the bottom to keep focused on the structure and give a clear end-on view, as shown in Figure 10c. Here the goal of the navigation was to give the viewer a fluid way to see every conceivable surface, inside and outside, of the virtual cylinder around which the helical molecule is wrapped.

Architectural Interior Navigation. More complex topologies arise naturally when we examine detailed 3D structures such as buildings and room interiors. Here it is natural to include new levels of constraints and choices. In the example of Figure 11, we restrict