Question:
Distinguish between modeling, rendering, and animation in computer graphics?
Amos
2006-11-22 10:27:35 UTC
computer graphics
Three answers:
zak_track
2006-11-22 10:45:12 UTC
Modeling is about describing your objects to the computer. For instance, you tell the computer, the wall goes here, the table goes here, etc. And you have to give it a description of the table as well; one way to do it is to list several points along the edges of the table and where they belong in relation to each other. After modeling, the computer "understands" what objects you want it to draw. Rendering is when the computer actually draws the objects. It involves calculations of how light beams will behave and what color they will be when they reach the viewing screen. Animation is creating a sequence of images that, when played in order, appear to show movement. It involves modelling how the objects will move, then rendering them, and it can also include effects such as motion blur.
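The answer's idea of describing an object as a list of points and how they relate can be sketched as a minimal polygon mesh. The names and coordinates below are illustrative, not from any particular package:

```python
# A minimal polygon-mesh sketch: an object is a list of 3D points
# (vertices) plus faces that reference those points by index.
vertices = [
    (0.0, 0.0, 0.0),  # corner A of a square table top
    (1.0, 0.0, 0.0),  # corner B
    (1.0, 1.0, 0.0),  # corner C
    (0.0, 1.0, 0.0),  # corner D
]
faces = [(0, 1, 2), (0, 2, 3)]  # two triangles covering the square

def face_points(face):
    """Resolve a face's vertex indices back into coordinates."""
    return [vertices[i] for i in face]

print(face_points(faces[0]))  # [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
```

Once the computer has this description, it "understands" the table's shape and can draw it from any viewpoint.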
mr nice
2006-11-22 10:34:17 UTC
First comes modeling, then animation, and the last step is rendering.



Modeling



The modeling stage could be described as shaping the individual objects that are later used in the scene. A number of modeling techniques exist, including, but not limited to, the following:



* constructive solid geometry

* NURBS modeling

* polygonal modeling

* subdivision surfaces

* implicit surfaces



Modeling processes may also include editing an object's surface or material properties (e.g., color, luminosity, diffuse and specular shading components (more commonly called roughness and shininess), reflection characteristics, transparency or opacity, or index of refraction), as well as adding textures, bump maps, and other features.
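The material properties listed above can be thought of as a bundle of named parameters attached to a surface. The attribute names below are my own illustrative choices, not any specific renderer's API:

```python
# A sketch of surface/material properties as named parameters.
from dataclasses import dataclass

@dataclass
class Material:
    color: tuple = (1.0, 1.0, 1.0)  # base (diffuse) color, RGB in [0, 1]
    diffuse: float = 0.8            # diffuse shading strength ("roughness")
    specular: float = 0.2           # highlight strength ("shininess")
    reflectivity: float = 0.0       # mirror-like reflection amount
    opacity: float = 1.0            # 1.0 = fully opaque, 0.0 = transparent
    ior: float = 1.0                # index of refraction (1.0 = air)

# A glass-like material: shiny, mostly transparent, refractive.
glass = Material(color=(0.9, 0.95, 1.0), specular=0.9, opacity=0.1, ior=1.5)
print(glass.ior)  # 1.5
```

The renderer later reads these parameters when computing how light interacts with each surface.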



Modeling may also include various activities related to preparing a 3D model for animation (although for a complex character model this becomes a stage of its own, known as rigging). Objects may be fitted with a skeleton, a central framework that can affect the shape or movements of the object. This aids animation, in that moving the skeleton automatically moves the corresponding portions of the model. See also forward kinematic animation and inverse kinematic animation. At the rigging stage, the model can also be given specific controls to make animation easier and more intuitive, such as facial expression controls and mouth shapes (phonemes) for lip syncing.
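Forward kinematics, mentioned above, can be sketched in a few lines: each bone has a length and a rotation relative to its parent, and the end of the chain follows automatically from the joint angles. This 2D example is my own simplification of the idea:

```python
import math

# Forward-kinematics sketch: a jointed "arm" skeleton in 2D.
# Each bone has a length and an angle relative to its parent bone;
# the end position follows automatically from the joint angles,
# which is why a skeleton makes animation easier.
def end_position(bone_lengths, joint_angles):
    x = y = 0.0
    total_angle = 0.0
    for length, angle in zip(bone_lengths, joint_angles):
        total_angle += angle            # child rotates relative to parent
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
    return x, y

# A straight arm: both joints at 0 radians, bones of length 2 and 1.
print(end_position([2.0, 1.0], [0.0, 0.0]))  # (3.0, 0.0)
```

Bending only the elbow (second joint) to 90 degrees moves the hand to (2, 1) without the animator having to reposition it by hand.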



Modeling can be performed by means of a dedicated program (e.g., Lightwave Modeler, Rhinoceros 3D, Moray), an application component (Shaper, Lofter in 3D Studio), or a scene description language (as in POV-Ray). In some cases there is no strict distinction between these phases; modeling is then just part of the scene creation process (this is the case, for example, with Caligari trueSpace and Realsoft 3D).



Particle systems are masses of 3D coordinates with points, polygons, splats, or sprites assigned to them. Together they act as a volume to represent a shape.
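A particle system can be sketched as a set of positions with velocities that are updated each frame. This fountain-like example, with my own illustrative parameters, shows the basic update loop:

```python
import random

# Particle-system sketch: each particle is a 3D position plus a
# velocity; the "shape" is the volume the mass of points occupies.
def spawn(n):
    return [{"pos": [0.0, 0.0, 0.0],
             "vel": [random.uniform(-1, 1),   # sideways spread
                     random.uniform(1, 3),    # upward launch speed
                     random.uniform(-1, 1)]}
            for _ in range(n)]

def step(particles, dt, gravity=-9.8):
    for p in particles:
        p["vel"][1] += gravity * dt           # gravity pulls along y
        for i in range(3):
            p["pos"][i] += p["vel"][i] * dt   # integrate position
    return particles

fountain = step(spawn(100), dt=0.016)
print(len(fountain))  # 100
```

Repeating `step` every frame, while spawning new particles and retiring old ones, produces effects such as rain, smoke, or fire.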



Rendering



Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life.



Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. Animations for non-interactive media, such as video and film, are rendered much more slowly. Non-real-time rendering trades speed for quality, applying limited processing power over a longer period to obtain a better image. Rendering times for individual frames may vary from a few seconds to an hour or more for complex scenes. Rendered frames are stored on a hard disk, then possibly transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.
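A quick back-of-envelope calculation, using illustrative numbers of my own, shows why film-length projects at these rendering times need serious hardware:

```python
# Why offline rendering is slow in aggregate: frame count times
# per-frame render time adds up fast for a feature-length film.
fps = 24                      # typical film playback rate
minutes = 90                  # illustrative feature length
hours_per_frame = 1           # a complex frame can take an hour to render

frames = fps * 60 * minutes   # total frames to render
machine_hours = frames * hours_per_frame
print(frames, machine_hours)  # 129600 frames, 129600 machine-hours
```

At roughly 129,600 machine-hours (about 15 years on a single machine), the motivation for the render farms mentioned later in this answer is clear.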



Several different, and often specialized, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering, through polygon-based rendering, to more advanced techniques such as scanline rendering, ray tracing, and radiosity. In general, different methods are better suited to either photo-realistic rendering or real-time rendering.



In real-time rendering, the goal is to show as much information as the eye can process in a thirtieth of a second. The primary goal is speed, not photo-realism. Renderers exploit the way the eye perceives the world, so the final image presented is not necessarily a faithful picture of the real world, but one the eye can closely associate with it. This is the basic method employed in games, interactive worlds, and VRML. The rapid increase in computer processing power has allowed a progressively higher degree of realism even in real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.

An example of a ray-traced image that typically takes seconds or minutes to render. The photo-realism is apparent.




When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. Rendering a single image/frame can then take anywhere from seconds to days. This is the basic method employed in digital media and artistic works.
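The core operation of ray tracing is testing whether a ray of light hits an object. For a sphere this reduces to solving a quadratic in the ray parameter t; the sketch below uses the standard derivation, not anything from the original answer:

```python
import math

# Ray-tracing sketch: does a ray hit a sphere, and at what distance?
# Substituting the ray origin + t*direction into the sphere equation
# gives a quadratic a*t^2 + b*t + c = 0 in the ray parameter t.
def ray_sphere(origin, direction, center, radius):
    """Return distance t to the nearest hit, or None if the ray misses."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                        # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)   # nearer of the two roots
    return t if t > 0 else None

# A ray along +z toward a unit sphere 5 units away hits at t = 4.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A full ray tracer repeats this test for every pixel against every object, then spawns further rays for shadows, reflections, and refractions, which is why render times climb so quickly.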



Rendering software may simulate such visual effects as lens flares, depth of field or motion blur. These are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artifact of a camera.



Techniques have been developed for the purpose of simulating other naturally-occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin).



The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system.



The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.
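Layer-by-layer compositing typically relies on per-pixel alpha blending. This sketch shows the classic "over" operator as one common way such layers are combined (the function name and values are my own illustration):

```python
# Compositing sketch: the "over" operator blends one rendered layer
# on top of another per pixel, weighted by the foreground's alpha.
def over(fg_rgb, fg_alpha, bg_rgb):
    return tuple(f * fg_alpha + b * (1.0 - fg_alpha)
                 for f, b in zip(fg_rgb, bg_rgb))

# A half-transparent red layer over a blue background gives purple.
print(over((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0)))  # (0.5, 0.0, 0.5)
```

Compositing software applies this kind of blend across whole images, letting a character layer, a background layer, and effects layers be rendered separately and assembled into the final shot.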



Renderers



Often renderers are included in 3D software packages, but there are some rendering systems that are used as plugins to popular 3D applications. These rendering systems include:



* AccuRender for SketchUp

* Brazil r/s

* Bunkspeed

* Final-Render

* Maxwell

* mental ray

* POV-Ray

* Realsoft 3D

* Pixar RenderMan

* V-Ray

* YafRay

* Indigo Renderer


This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.