Automated Vegetation Generation


    Most large-scale terrains built for simulator-based military training are bleak, desolate places that share a strong measure of commonality with desert environments.  One of the ongoing problems with relatively featureless environments is the trainee's inability to grasp the scale of the surroundings.  It is nearly impossible to judge distance or speed in a world devoid of a single bush, tree, or surface detail needed to establish depth cues.  With today's terrains of large polygonal meshes draped with blurry satellite imagery, such visual cues are almost entirely absent for the infantry soldier on the ground or the low-flying helicopter or aircraft pilot.  This situation must change to improve training effectiveness. 

    Within today's cockpit simulators, it is too easy for a pilot to quickly locate, identify, and destroy targets on the virtual battlefield.  When the only object protruding from the terrain surface is an enemy tank with no cover to hide behind, the task of acquiring and destroying the enemy is simplified to the point of providing negative training.  Adding vegetation to the synthetic environment makes training far more realistic and, consequently, much harder.



    Such terrain characteristics are highly desirable; thus, for detailed simulated environments like those found in the America's Army game, a team of artists is hired to hand-craft a custom terrain database.  These databases are not only simulation-system specific, limiting their reusability and interoperability with other simulations, but also take a great deal of time to create.  Additionally, such databases are limited to a player's expected actions and viewpoints.  If players deviate from the developers' expectations, they quickly discover places within the world that simply "do not exist".  These limitations prevent simulation scalability throughout the full spectrum of military operations.

    Terrain visualization techniques typically focus on optimizing the appearance of the terrain's surface.  Level-of-detail techniques can render realistic-looking vegetation objects within desired performance constraints.  We can combine these methods with imagery analysis to automatically construct vegetation-laden terrain based on readily-available source data (elevation, imagery, and land cover classification), adding plausible terrain details as needed.  Place these landscape construction and visualization procedures within a networked combat simulator and you have dramatically increased the difficulty of training exercises and improved our soldiers' chances in the field.



    The aim of our research is to replace the barren landscapes found within most 3D combat simulations with detailed terrain and natural surroundings that dramatically increase both the believability and difficulty of the training environment, matching the synthetic perceptual stimuli to the actual perceptual stimuli needed to execute specific training tasks.  We posit that there are many unmet visual cue requirements (e.g. vegetation) within existing simulators that are vital to the effectiveness of simulator-based training.  Our approach enhances the apparent quality of the given set of terrain elevation data and surface imagery, adds vegetation objects that are placed similarly to the arrangement within the actual environment, and generates a plausible synthetic terrain environment where data is missing or incomplete.

    Our algorithm uses on-demand, runtime processing of elevation data points to create a height field of the terrain.  Perlin noise is added to create surface details between the known elevation postings.  Using the SOARX continuous level of detail algorithm, we take our height field data and construct optimized triangular meshes based on the player's view frustum.  From this height field data, we create heightmaps, slope maps (with aspect angles), and relative elevation maps.  Over this height field data, we drape satellite imagery shaded at run-time with added detail maps corresponding to the noise-generated additions to the elevation data. 
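A rough illustration of the derived map layers follows; the function name, window size, and aspect convention here are our own illustrative choices, not details of the actual system.  The sketch computes slope, aspect, and relative-elevation maps from a height-field array:

```python
import numpy as np

def terrain_maps(height, cell_size=30.0, window=5):
    """Derive slope, aspect, and relative-elevation maps from a height field.

    height    -- 2-D array of elevation samples
    cell_size -- horizontal spacing between postings (metres)
    window    -- neighborhood size for the relative-elevation map
    """
    # Finite-difference gradients of the surface along rows and columns.
    dz_dy, dz_dx = np.gradient(height, cell_size)

    # Slope in degrees; aspect as the direction of steepest descent
    # (one common convention, measured from the +y axis).
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, -dz_dy))

    # Relative elevation: height above/below the local mean, which
    # distinguishes ridge tops (positive) from valley floors (negative).
    pad = window // 2
    padded = np.pad(height, pad, mode="edge")
    local_mean = np.zeros_like(height, dtype=float)
    rows, cols = height.shape
    for i in range(rows):
        for j in range(cols):
            local_mean[i, j] = padded[i:i + window, j:j + window].mean()
    rel_elev = height - local_mean
    return slope, aspect, rel_elev
```

For a uniform ramp rising one cell size per posting, every interior cell reports a 45-degree slope and near-zero relative elevation, as expected.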

    For vegetation placement, we process national land cover classification (LCC) data for the given terrain region to examine the proximity of related LCC types.  Using this analysis and the topographic influences derived previously (inspired by Johan Hammes' work), a probability map is produced for each LCC type to reflect the likelihood that the type exists within a pixel's corresponding area on the terrain surface.  Random draws against this probability map determine the location, density, and type of vegetation objects found within the synthetic environment.  The resulting geotypical distribution looks plausible, and this simple algorithm can be extended to incorporate soil moisture and other factors, or to generate geotypical distributions of man-made landscape features.
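The probability-map-plus-random-draw idea can be sketched as follows.  The neighborhood-fraction smoothing below is a simplified stand-in for the actual closeness analysis, and all names are illustrative:

```python
import numpy as np

def probability_map(lcc, code, window=3):
    """Fraction of neighbors (including self) matching a given LCC code.
    A crude stand-in for the smoothed 'closeness' analysis in the text."""
    match = (lcc == code).astype(float)
    pad = window // 2
    padded = np.pad(match, pad, mode="edge")
    prob = np.zeros_like(match)
    rows, cols = lcc.shape
    for i in range(rows):
        for j in range(cols):
            prob[i, j] = padded[i:i + window, j:j + window].mean()
    return prob

def place_vegetation(prob, rng, density=1.0):
    """Independent Bernoulli draw per cell against the probability map;
    returns (row, col) positions where a vegetation object is placed."""
    draws = rng.random(prob.shape) < prob * density
    return np.argwhere(draws)
```

Because placement is driven entirely by the probability map and the random number generator's seed, two hosts drawing from the same map and seed produce identical object placements.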

    We are investigating the use of machine learning techniques for automatically estimating the LCC distribution in such forms as tables, simple Bayesian belief networks, and neural networks.  The attraction of these techniques is twofold.  First, they typically produce a formally specified probability distribution whose assumptions and biases can be rigorously characterized.  Second, they can be partially or fully automated, reducing the workload on a human modeler.  A liability of these techniques is that they require training data in order to function, i.e. a region in which the LCC values are known and provided to the system.  The tacit assumption here is that the provided training data is correct and has a distribution similar to that of the target locale to which the technique is being applied.
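A minimal version of the table-based estimator might look like the following; the choice of discretized elevation and slope as predictors is our own assumption for illustration:

```python
from collections import Counter, defaultdict

def learn_lcc_table(samples):
    """Build a frequency table from a training region.

    samples -- iterable of (elev_bin, slope_bin, lcc) tuples, where the
               LCC value is known for each binned terrain cell.
    """
    table = defaultdict(Counter)
    for elev_bin, slope_bin, lcc in samples:
        table[(elev_bin, slope_bin)][lcc] += 1
    return table

def predict_lcc(table, elev_bin, slope_bin, default=None):
    """Most frequent LCC seen for this terrain-feature bin in training."""
    counts = table.get((elev_bin, slope_bin))
    if not counts:
        return default
    return counts.most_common(1)[0][0]
```

Normalizing each Counter yields the formally specified distribution P(LCC | elevation, slope) mentioned above; the table transfers to a new locale only to the extent that the training region's distribution matches it.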



    We have seen that separate efforts to improve one particular aspect of terrain fidelity (e.g. CLOD techniques, billboard trees, impostor clouds, detail maps) are not enough to give players the realism that tactical military training requires.  Thus, we have taken an integrated approach that incorporates these "best of breed" techniques within a single architecture.  While it has previously taken teams of artists to create static, small-scale, custom-tailored landscapes, our approach automatically generates vast, realistic terrains at runtime for any place on Earth.  With a minimal amount of shared source data and parameters, terrains can be synchronized easily between clients, guaranteeing that the same terrain environment is created by all hosts within a heterogeneous networked simulation system.  Our terrains can be reused or regenerated afresh with new parameters in response to the needs of the training audience.  With the simple change of a random number seed, a new terrain can be generated without any manual database manipulation.  This feature gives trainers the flexibility to use the same terrain repeatedly or to create a new one each time, preventing trainees from depending on the static nature of most simulation databases. 
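The seed-based synchronization reduces to deterministic generation from shared parameters.  A toy sketch (the function name is hypothetical, and real noise generation would replace the uniform draws):

```python
import numpy as np

def generate_height_noise(seed, shape=(4, 4)):
    """Each host runs this independently; sharing only the seed (plus the
    common source data and parameters) yields bit-identical terrain on
    every client, with no height field transmitted over the network."""
    rng = np.random.default_rng(seed)
    return rng.random(shape)

# Two clients with the same seed agree exactly; a new seed yields a
# fresh terrain without touching any database.
client_a = generate_height_noise(1234)
client_b = generate_height_noise(1234)
fresh = generate_height_noise(9999)
```

The same principle extends to every stochastic stage in the pipeline (surface noise, vegetation draws), provided each stage consumes the generator in a fixed order on all hosts.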

    The immediate and practical benefit of this work is that tactical training improves by giving players a more realistic environment in which to operate.  It is possible to have simulators and simulated forces engage in a multi-spectrum tactical conflict where the natural environment takes on an active role in the experience and is no longer simply a backdrop.  It is only at this point, when the ground looks real and foliage hides your view of the enemy that terrain can truly work towards becoming a full-fledged entity within the distributed virtual environment.





Figure: Slope map.

Figure: Relative elevation map.

Figure: Each color represents a different LCC type.

Figure: In this case, we are picking out LCC #42.

Figure: Third-nearest-neighbor smoothing applied to the picked data points for LCC #42.

Figure: Darker shading conveys a greater likelihood that LCC #42 objects will exist in that area.

Figure: In addition to an LCC type's probability being influenced by the topography, points can be eliminated for violating placement constraints (too low: blue; too high: green; too steep: red).  This effectively removes any chance of that LCC type existing at that location, which is useful for masking out urban areas.