Rendering Complex Models Using Weighted
Blended Order-Independent Transparency in
WebGL
By Samuel Cosgrove, Brock Stoops
MAP/COP 3930H
Abstract: Attempts at solving the problem of order-independent transparency have found
solutions that, while valid and useful, tend to require high power machines that may not fit
standard graphics architecture. However, recent breakthroughs in the technique of weighted
blended order-independent transparency have opened possibilities of rendering accurate
transparency on low-spec hardware and low-spec versions of OpenGL. Implementations of the
technique have been successfully demonstrated in WebGL. However, most of these demos only
attempt to render quads or simple primitives. We test the efficiency of rendering many models
of growing complexity utilizing weighted blended order-independent transparency. We
compare the performance and accuracy to standard alpha blended transparency, and attempt
to identify contributing factors to render time increase between these methods.
2014
Introduction
One of the properties of the real world that has been a challenge since the inception of
graphics study is transparency, specifically the effect of occlusion on objects of varying levels of
transparency. The class of techniques for rendering transparency can be divided into two
groups: order-dependent strategies such as alpha blending and order-independent
transparency (McGuire & Bavoil, 2013, p. 2).
The study of order-independent transparency (henceforth abbreviated as OIT) has
increased in recent years. But these techniques usually target high-end machines and
occasionally utilize non-standard graphics hardware to address the bleeding-edge possibilities.
Of note, however, is a recently proposed method of OIT called weighted blended OIT (McGuire
& Bavoil, 2013, p. 2). It weights each fragment's contribution by a function of its distance from
the camera, rendering multiple transparent layers with greater accuracy.
Furthermore, this method is compatible with OpenGL variants for embedded systems, including
WebGL. Demonstrations have been made utilizing the technique in WebGL, but these have
usually attempted to only render simple primitives like quads, cubes, or spheres.
We will attempt to render multiple complex models at once utilizing weighted blended
OIT (henceforth abbreviated to WBOIT). We will explore traditional methods of transparency
rendering, in comparison with WBOIT. We will run simulations with these techniques, rendering
transparent models of increasing complexity. We will discuss these results and what
observations can be made about what factors impact the processing time.
Background
A. Alpha Compositing
Alpha compositing combines a rendered image with the background to create the
appearance of full or partial transparency.
It requires an alpha value for each element to determine where color should be drawn
and where the element should appear empty (Carnecky, Fuchs, Mehl, Jang, Peikert, 2013, p.
839). Alvy Ray Smith created the alpha channel in the late 1970s to store the alpha information
about each pixel. Values between 0 and 1 inclusive were stored for each pixel to denote values
of transparency and opaqueness. In our demo each pixel color is displayed as an RGBA tuple
which would look like this:
(0.25, 0.5, 0.0, 0.5)
This represents a pixel with 25% of maximum red intensity, 50% of maximum green intensity,
no blue, and 50% opacity. A value of 0 means the color channel is not visible in the pixel (or, for
alpha, that the pixel is completely transparent), and a value of 1 is the maximum color intensity
(or a completely opaque pixel).
The alpha channel can express alpha compositing utilizing compositing algebra. The
most common operation in compositing algebra is the over operation. This denotes that one
image element is in the foreground and the other is in the background. More simply, one is
over the other (Bavoil & Myers, 2008, p. 2). The formula below can be used on each pixel to
find the result of the image element overlaying.
C_o = C_a·α_a + C_b·α_b·(1 − α_a)
In the above formula C_o is the result of the over operation, C_a is the color of pixel a, and
C_b is the color of pixel b. α_a is the alpha of the pixels in element a and α_b is the alpha of the
pixels in element b (Bavoil & Myers, 2008, p. 4). When dealing with the merging of layers, an
associative version of this equation is generally used:
C_o = (1/α_o) [C_a·α_a + C_b·α_b·(1 − α_a)], where α_o = α_a + α_b·(1 − α_a)
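To make the operation concrete, the following illustrative JavaScript helper (our own example, not part of the environment described later) applies the associative form of the over operator to two straight-alpha RGBA colors:

// Illustrative example: composite foreground fg over background bg, where each
// color is an object { r, g, b, a } with components in [0, 1].
function over( fg, bg ) {
    var alphaOut = fg.a + bg.a * ( 1 - fg.a );
    if ( alphaOut === 0 ) {
        return { r: 0, g: 0, b: 0, a: 0 };
    }
    return {
        r: ( fg.r * fg.a + bg.r * bg.a * ( 1 - fg.a ) ) / alphaOut,
        g: ( fg.g * fg.a + bg.g * bg.a * ( 1 - fg.a ) ) / alphaOut,
        b: ( fg.b * fg.a + bg.b * bg.a * ( 1 - fg.a ) ) / alphaOut,
        a: alphaOut
    };
}
// Example: over( { r: 1, g: 0, b: 0, a: 0.5 }, { r: 0, g: 0, b: 1, a: 1 } )
// yields { r: 0.5, g: 0, b: 0.5, a: 1 }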
B. Alpha Blending
Alpha blending is an extension of alpha compositing, combining a translucent
foreground color with a background color. These layers combine and blend together to create a
new color (Liu, Wei, Xu, Wu, 2009, p. 1). Alpha blending is described by the equation below.
DestinationColor.rgb =
(SourceColor.rgb*SourceColor.a)+(DestinationColor.rgb*(1-SourceColor.a))
However, this equation requires that source colors are multiplied at the time of pixel
calculations, which can be very time inefficient when generating millions of pixels at a time
(Salvi, Montgomery, Lefohn, 2011, p. 120).
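In raw WebGL, this corresponds to the standard source-alpha blend state; a minimal sketch, assuming gl is an existing WebGLRenderingContext:

gl.enable( gl.BLEND );
// DestinationColor = SourceColor * SourceAlpha + DestinationColor * (1 - SourceAlpha)
gl.blendFunc( gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA );
// Transparent geometry is then drawn back to front, after the opaque geometry.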
C. Partial Coverage
Partial coverage is a phenomenon in which a surface transmits a fraction of the incident light
without refraction while still emitting or reflecting light of its own. Partial coverage is used to
emulate different modeling surfaces, such as a wheel modeled as a square, and applies when
overlaying multiple layers of the same or similar images and measuring the net coverage.
Figure 1: Demonstration of partial coverage (McGuire & Bavoil, 2013 p. 126).
In the above picture there are three screens of identical height and width, each with 50% net
coverage. In (a), all three screens are perfectly stacked, so the total image has 50% net
coverage. In (b) they are offset in some random arrangement, so the total net coverage of the
image falls somewhere between 50% and 100%. In (c) they are arranged so that none of them
overlap, making the total net coverage 100%.
Partial coverage is calculated similarly to alpha blending, except that α measures the net
coverage of the layers rather than opacity. As with alpha blending, it requires pre-multiplied
values for the color of the pixel (Enderton, Sintorn, Shirley, Luebke, 2011, p. 2).
Pre-multiplication increases rendering speed and avoids color bleeding where α is zero, and it
allows an element with zero coverage to still add intensity, as an emissive surface would.
C_f = C_1 + C_0·(1 − α_1)
This equation calculates the partial coverage composite of two pre-multiplied colors C_1 and
C_0, where C_0 is the background and C_1 is the foreground (Enderton et al., 2011, p. 2). Again,
in this equation α stands for the fraction of the image covered: a value of 1 means 100% of the
image is covered, and a value of 0.5 means that 50% of the image is covered. The most common
partial coverage technique used today is sorted over compositing, although it is not the most accurate.
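Because the colors are pre-multiplied, the corresponding WebGL blend state drops the extra multiply by source alpha; a minimal sketch, again assuming an existing context gl:

gl.enable( gl.BLEND );
// Cf = C1 + C0 * (1 - alpha1), where C1 has already been multiplied by its coverage
gl.blendFunc( gl.ONE, gl.ONE_MINUS_SRC_ALPHA );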
D. Order-Independent Transparency
OIT differs by not requiring the geometry to be rendered in sorted order (Maule, Comba,
Torchelsen, Bastos, 2013, p. 103). Ordering takes up processing time per frame, and does not
always produce the most accurate image (Everitt, 2001, p. 1). One approach to OIT is to sort the
geometry per pixel after rasterization. The previous equations for partial coverage and alpha
blending are not commutative, and are therefore order-dependent: those techniques are
calculated and rendered from back to front, layer on top of layer (Maule et al. 2013, p. 103). To
obtain order independence, the per-fragment values must instead be combined with commutative
operations such as addition or multiplication.
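As a small worked example of why order matters, take a half-transparent red element a (C_a = (1, 0, 0), α_a = 0.5) and a half-transparent blue element b (C_b = (0, 0, 1), α_b = 0.5) over a black background. Compositing a over b gives 0.5·(1, 0, 0) + 0.5·0.5·(0, 0, 1) = (0.5, 0, 0.25), whereas b over a gives (0.25, 0, 0.5); since the two orderings disagree, the over operator cannot simply be applied in an arbitrary order.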
E. Weighted Blended Order-Independent Transparency
The main focus of WBOIT is that, when blending the layers together, different weights
are given to colors based on the distance from the camera. This prevents surfaces with similar
coverage from overpowering the average color, which happens in other OIT techniques. The
blended result is computed with a weighted average proposed by McGuire and Bavoil in their
paper on OIT (2013, p. 128). The weight function w(z_i, α_i) can be any monotonically
decreasing, non-zero function of the depth z. This function creates an occlusion cue between
images and layers, allowing for very accurate transparency. McGuire and Bavoil proposed
several candidate weight functions in pursuit of the best image accuracy, each scaled to
compensate for the small magnitude of typical camera-space depth values, where z is the
distance from the camera (2013, p. 128). The specific weight function used in our
implementation is shown later in Figure 10.
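In our reading of McGuire and Bavoil's formulation (2013, p. 128), the blended result that the render passes described later approximate has roughly the form

C_f = [ Σ C_i·w(z_i, α_i) / Σ α_i·w(z_i, α_i) ] · (1 − Π (1 − α_i)) + C_0·Π (1 − α_i)

where the sums and products run over all transparent fragments covering the pixel, C_i = α_i·c_i are the pre-multiplied surface colors, and C_0 is the background color; readers should consult the original paper for the exact equations and the candidate weight functions.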
Although in some cases the results of the image using WBOIT look similar to the regular alpha
blending using the over operator, WBOIT still theoretically runs more efficiently by not
requiring an order dependency or any sorting. One of the downfalls of WBOIT is that it is not
translation-invariant along the depth axis, which means changing the distance of the image
from the camera will change the color of the whole image (McGuire & Bavoil, 2013, p. 128).
Method
Our WBOIT Environment is an extension of prior work done by Alexander Rose with his
single mesh OIT demo (Rose, 2014), itself derived from the WBOIT method described by
McGuire and Bavoil. Our work is adapted to render many objects made of complex meshes. It
renders a scene with 64 models, each with a random color, arranged in a 4x4x4 cube. Model
and rendering style are determined by on-screen controls. It also allows for real time mouse
input for camera control. Due to limitations in Chrome for file reading, the environment is
currently only compatible with Firefox.
A. WBOIT Environment File Structure
Our environment is arranged like many WebGL projects, with an HTML front end, and
script files that utilize GLSL shaders. We designed the file structure and separation to
encapsulate functionality for reuse and ease of understanding.
Figure 2: File structure of the WBOIT Environment.
The environment is instantiated and managed with the “mainScript.js” script file. It
interfaces with other scripts and files in order to generate the desired environment. Its purpose
is further described in the sections below.
The “lib” directory contains other scripts. Inside is a “renderManager.js” script which
encapsulates model loading and rendering. It is further explained in Section C. Renderers.
We make extensive use of third-party scripts to facilitate prototyping our simulation.
The THREE Javascript library, created by Ricardo Cabello, is a commonly used WebGL graphics
library. It simplifies major components of setting up and rendering an environment.
The “stats.js” script, also developed by Ricardo Cabello, displays an on-screen widget
with real-time performance statistics. We modified it slightly to facilitate capturing data for
experimental output. “TrackballControls.js”, developed by Eberhard Graether and Mark Lundin,
is a commonly used extension for WebGL projects to facilitate camera control. “Detector.js”,
developed by Ricardo Cabello and “AlteredQualia”, checks WebGL compatibility of a browser.
Our “model” directory contains five model files, in JSON format: “cube”, “teapot”,
“house”, “house_of_parliament”, and “dabrovic-sponza”.
The “shaders” directory contains all our vertex and fragment shaders in OpenGL Shading
Language (GLSL) code. These will be extensively described in Section C. Renderers.
B. Model Loading Pipeline
Initially, a model name is given to the “loadNewModels()” function. For each position in
the 4x4x4 cube, “addModel()” is run, with parameters of position, a random color with alpha
value of 0.5, and the model name.
The “addModel()” function first runs “modelLoad()” to generate the model as a
THREE.Object3D instance. This model is colored, positioned, and added to the scene.
The “modelLoad()” function retrieves mesh data from the model JSON file. This is
passed to the “generateModelFromInfo()” function to get the THREE.Object3D instance. For our
purposes, we scale the model based on its minimum and maximum bounds so it fits into an
approximately 1x1x1 cube in the world (a sketch of this normalization follows Figure 3).
"meshes": [
{
"vertexPositions" : [
-1,0,-1,
1,0,-1,
-1,0,1,
1,0,1
],
"vertexNormals" : [0,1,0,0,1,0,0,1,0,0,1,0],
"vertexTexCoordinates" : [[0,0,1,0,0,1,1,1]],
"indices" : [0,1,2,2,3,1],
"materialIndex" : 0
}
],
Figure 3: Each model has meshes, each with vertices and indices to determine order of triangles drawn.
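The normalization step mentioned above can be sketched as follows; this is an illustrative example rather than the exact code of generateModelFromInfo(), and it assumes the loaded object can be measured with THREE.Box3:

// Illustrative sketch: scale a loaded object so its largest dimension is 1,
// fitting it into an approximately 1x1x1 volume.
function normalizeToUnitCube( object3d ) {
    var bounds = new THREE.Box3().setFromObject( object3d );
    var sizeX = bounds.max.x - bounds.min.x;
    var sizeY = bounds.max.y - bounds.min.y;
    var sizeZ = bounds.max.z - bounds.min.z;
    var maxExtent = Math.max( sizeX, sizeY, sizeZ );
    if ( maxExtent > 0 ) {
        object3d.scale.multiplyScalar( 1 / maxExtent );
    }
    return object3d;
}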
C. Renderers
The opaque renderer and transparent renderer are simple renderers, the former for
solid color and the latter supporting transparency with standard alpha blending. Below are the
rendering materials defined for both.
// Standard opaque material
material = new THREE.RawShaderMaterial( {
vertexShader: parseDoc( './shaders/vertexShader.js' ),
fragmentShader: parseDoc( './shaders/fragmentShader.js' ),
side: THREE.DoubleSide,
transparent: false
});
// Standard alpha blended material
transparentMaterial = new THREE.RawShaderMaterial( {
vertexShader: parseDoc( './shaders/vertexShader.js' ),
fragmentShader: parseDoc( './shaders/fragmentShader.js' ),
side: THREE.DoubleSide,
transparent: true
});
Figure 4: The opaque and transparent render materials, instances of THREE.RawShaderMaterial.
Note that the THREE library handles standard alpha blending internally via a simple
transparency flag on the material. Rendering objects with these renderers is as simple as
calling THREE's render function. Below are the vertex and fragment shaders that both types of
renderers use.
// Standard color to pass to the frag shader
fragColor = color;
//standard calculation of position of vertex, to pass to frag shader
gl_Position = projectionMatrix * modelViewMatrix *
vec4( position, 1.0 );
// Calculate the camera-space position to obtain the fragment depth
vec4 fragCoord = modelViewMatrix * vec4( position, 1.0 );
fragZ = fragCoord.z;
Figure 5: Main function for standard vertex shader.
vec4 color = vec4( fragColor );
gl_FragColor = color;
Figure 6: Main function for standard fragment shader.
The WBOIT process is more intensive. It requires a three-pass algorithm. One pass
renders accumulation color to a texture. One renders “revealage”, or the opposite of coverage,
to another texture (McGuire & Bavoil 2013, p. 129). A final pass composites these two textures
into the final rendered frame.
Below is the function in “renderManager.js” that renders a frame with WBOIT. Note that
after the accumulation and revealage textures are made, the textures are passed into the
compositing uniforms before the composite render pass.
renderer.clearColor();
// Render accumulation texture
scene.overrideMaterial = accumulationMaterial;
renderer.render( scene, camera, accumulationTexture );
// Render revealage texture
scene.overrideMaterial = revealageMaterial;
renderer.render( scene, camera, revealageTexture );
// Add textures to compositing shaders
compositingUniforms[ "texAccum" ].value = accumulationTexture;
compositingUniforms[ "texReveal" ].value = revealageTexture;
// Render composited frame
renderer.render( compositeScene, compositeCamera );
scene.overrideMaterial = null;
Figure 7: WBOIT render function code.
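Not shown in Figure 7 is the setup of the intermediate targets and blend state. The sketch below shows the kind of configuration such a pipeline needs; the additive and multiplicative blend choices follow McGuire and Bavoil's description, and the exact values in our renderManager.js may differ.

// Floating-point render targets for the two intermediate passes (width and height assumed)
var accumulationTexture = new THREE.WebGLRenderTarget( width, height, {
    type: THREE.FloatType, minFilter: THREE.NearestFilter, magFilter: THREE.NearestFilter
} );
var revealageTexture = new THREE.WebGLRenderTarget( width, height, {
    type: THREE.FloatType, minFilter: THREE.NearestFilter, magFilter: THREE.NearestFilter
} );
// Accumulation sums weighted colors, so it blends additively and must not write depth
accumulationMaterial.blending = THREE.CustomBlending;
accumulationMaterial.blendSrc = THREE.OneFactor;
accumulationMaterial.blendDst = THREE.OneFactor;
accumulationMaterial.depthWrite = false;
// Revealage accumulates a product of per-fragment terms, so it blends multiplicatively
revealageMaterial.blending = THREE.CustomBlending;
revealageMaterial.blendSrc = THREE.ZeroFactor;
revealageMaterial.blendDst = THREE.OneMinusSrcColorFactor;
revealageMaterial.depthWrite = false;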
The accumulation and revealage render passes each utilize the standard vertex shader,
but the fragment shaders are unique. Below is the code for these shaders. They share the same
weight function for alpha values, to facilitate the summation process described for WBOIT.
float alpha = fragColor.a;
// Scale color based on alpha value
vec3 Ci = fragColor.rgb * alpha;
// Further scale color and alpha based on weighted alpha
gl_FragColor = vec4( Ci, alpha ) * w( alpha );
Figure 8: Main function for fragment shader to create accumulation texture.
float alpha = fragColor.a;
// Calculates alpha based on the weighted alpha value of
// the coordinate
gl_FragColor = vec4( vec3( w( alpha ) ), 1.0 );
Figure 9: Main function for fragment shader to create revealage texture.
// Weight function shared by the accumulation and revealage fragment shaders.
// fragZ is the camera-space depth varying passed in from the vertex shader.
float w( float a ) {
    float colorResistance = 1.0;
    float rangeAdjustmentsClampBounds = 10.0;
    float depth = abs( fragZ );
    float orderingDiscrimination = 200.0;
    float orderingStrength = 5.0;
    float minValue = 1e-2;
    float maxValue = 3e3;
    return pow( a, colorResistance ) *
        clamp(
            rangeAdjustmentsClampBounds /
                ( 1e-5 + ( depth / orderingStrength ) +
                  ( depth / orderingDiscrimination ) ),
            minValue, maxValue
        );
}
Figure 10: Weight function used by the fragment shaders. Takes in alpha value a and scales based on distance from camera.
texCoords = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
Figure 11: Main function for composite frame vertex shader.
The vertex and fragment shaders for this compositing pass, shown in Figures 11 and 12,
combine the RGB and alpha values from the two textures into the final rendered frame.
// Color from accumulation texture at coordinates
vec4 accum = texture2D( texAccum, texCoords );
// Alpha from reveal texture at coordinates
float reveal = texture2D( texReveal, texCoords ).r;
// Composites above to calculate color at fragment
gl_FragColor =
vec4( accum.rgb / clamp( accum.a, 1e-9, 5e9 ), reveal );
Figure 12: Main function for composite frame fragment shader; composites color from accumulation texture and alpha from
revealage texture.
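The composite pass is drawn as a full-screen quad through an orthographic camera. The sketch below shows how the compositeScene and compositeCamera referenced in Figure 7 might be constructed; the variable names for the shader sources are assumptions, and the blend settings for compositing over the background are omitted.

var compositeCamera = new THREE.OrthographicCamera( -1, 1, 1, -1, 0, 1 );
var compositeScene = new THREE.Scene();
var compositeMaterial = new THREE.ShaderMaterial( {
    uniforms: compositingUniforms,            // holds the texAccum and texReveal samplers
    vertexShader: compositeVertexSource,      // GLSL sources of Figures 11 and 12 (loading omitted)
    fragmentShader: compositeFragmentSource,
    transparent: true,
    depthTest: false,
    depthWrite: false
} );
// A 2x2 plane exactly covers the orthographic camera's view volume
compositeScene.add( new THREE.Mesh( new THREE.PlaneGeometry( 2, 2 ), compositeMaterial ) );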
D. Experimental Procedure
All of our simulations were performed on a laptop running a 2.0 GHz Intel Core i7
2635QM CPU, with four independent processor cores. The GPU is an AMD Radeon HD 6490M
with 256 MB of dedicated GDDR5 memory. The hard drive is a Seagate 1 TB solid state hybrid
drive (SATA 6 Gbps, 64 MB).
We had a total of 15 experimental conditions: all combinations of model type (Cube,
Teapot, House, Parliament, and Sponza) and rendering type (WBOIT, opaque, and transparent).
For each condition, we recorded the milliseconds per frame (henceforth abbreviated to MPF)
over the course of a minute. The scene being rendered was rotated in an arbitrary manner for
the length of the recording. The output was transferred to spreadsheets for further analysis.
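For reference, recording MPF amounts to timing successive animation frames. A simplified sketch of the idea is shown below; our actual recording uses the modified stats.js widget described earlier.

// Simplified frame-time capture (the real environment uses stats.js)
var lastTime = performance.now();
var samples = [];
function animate() {
    requestAnimationFrame( animate );
    renderer.render( scene, camera );          // or the WBOIT render function of Figure 7
    var now = performance.now();
    samples.push( now - lastTime );            // milliseconds spent on this frame (MPF)
    lastTime = now;
}
animate();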
Results
Below are the results for the rendering of the Cube model, along with images of both
the WBOIT (left) and the regular alpha blended transparency (right) renderings.
Figure 13: Cube scene MPF over a minute.
Figure 14: Cubes rendered with WBOIT (14a, left) and regular transparency (14b, right).
Figure 14a, when compared to Figure 14b, looks smoother, with each individual layer
visible. Figure 14b loses transparency information when many models are stacked, while each
layer remains clearly visible with the WBOIT technique. Though the transparency has much
more depth in the WBOIT method, it comes at a notable cost in efficiency: the average MPF is
noticeably larger than for regular transparency. The graph in Figure 13 shows that WBOIT
normally falls between 60 and 80 MPF, with an average of 68.188 ms per frame, while regular
transparency falls between 50 and 70 MPF, with an average of 54.838 ms. The difference in the
techniques yields an approximate 20% decrease in efficiency for WBOIT.
Figure 15: Teapot scene MPF over a minute.
Figure 16: Teapots rendered with WBOIT (16a, left) and regular transparency (16b, right).
Figure 15 displays a continuing trend; WBOIT in our simulation is less efficient than
standard transparency. Though the teapot model is more complex than a cube, the efficiency
and speed of creating the scene is very similar. The WBOIT teapot scene takes somewhere
between 60 and 80 milliseconds to render the frame with an average value of 66.89
milliseconds. Standard transparency takes between 45 and 65 with an average value of 55.62
milliseconds to render a frame. This makes the regular transparency technique in this case
approximately 17% faster for this model type. Of note is that the standard-transparency teapot
scene looks very close to the WBOIT scene. Figure 16b shows many depth layers of teapots
stacked on top of each other, with even the furthest layer clearly visible. WBOIT still provides a
higher degree of information, as shown in 16a.
Figure 17: House scene MPF over a minute.
Figure 18: Houses rendered with WBOIT (18a, left) and regular transparency (18b, right).
Figure 17 shows the same trend of WBOIT being less time efficient. For the house scene,
the difference in performance increases. In Figure 17 we can see WBOIT hovering around the
120 to 140 range, with only a few outliers going below 100 MPF. This gives us an average time
of 116.06 milliseconds taken to render a frame for the model shown in Figure 18a. Standard
transparency hovers between 75 and 100, barring a few outliers. The average time in our
recordings was 83.24 milliseconds taken to render a frame. This is a decrease in efficiency
around 30%.
Figure 19: Parliament scene MPF over a minute.
Figure 20: Parliament models rendered with WBOIT (20a, left) and regular transparency (20b, right).
The graph in Figure 19 demonstrates how the interval between frames is growing. The
Parliament model, as shown above in Figure 20a and 20b, is the most complex model tested
yet. Since the Parliament model is very complex, with over 80 meshes of varying complexity,
fewer data points could be generated due to the slow render of each frame. In the first three
models we had over 1000 data points in each minute of testing, but the Parliament model only
had 200. The WBOIT data line is hovering around the 350 millisecond mark, with about a 25
millisecond leeway in each direction. By far this is the slowest model render thus far. Standard
transparency hovers between the 150 and 200 millisecond range. We have an average value of
348.17 MPF for WBOIT and 174.92 MPF for the regular transparency technique, so standard
transparency renders frames in roughly half the time of the WBOIT technique.
Figure 21: Sponza scene MPF over a minute.
Figure 22: Sponza models rendered with WBOIT (22a, left) and regular transparency (22b, right).
Figure 21 shows the data for the Sponza model, the slowest performing model. This is
likely due to the large number of vertices. The values of WBOIT are the slowest yet, hovering
between 1500 and 2000 MPF. Standard transparency stayed between 700 and 1300 throughout
the recording process. The Sponza model only gave around 30 data points in the whole minute
of recording for WBOIT, taking almost two whole seconds to render a single frame. Average
time for WBOIT was 1733 and transparency was 833. In this model the transparency technique
rendered over twice as fast as the WBOIT method. The color difference in the different
techniques is most noticeable in this model as we can clearly see all of the individual layers in
the WBOIT shown in Figure 22a. With standard transparency in 22b, it is hard to see the models
behind the closest rendered one; the hollowed portion of the model is the only place where the
color differences of the models behind it can be noticed.
Figure 23: Measure of average MPF against number of vertices.
Figure 23 shows the relation between the number of vertices and MPF: as the number of
vertices increases, so does the time per frame. The MPF for standard transparency grows more
slowly than for the WBOIT method, which explains why the gap between the two techniques
widened for the more complex models. Both trend lines are best fit by a second-order
polynomial. The WBOIT line is represented by the polynomial regression:
y = 0.0000002x² + 0.013x + 77.723
The regular transparency is represented by the polynomial regression:
y = 0.0000001x² + 0.0051x + 60.718
Figure 24: Measure of average MPF against number of triangles.
The graph in Figure 24 shows how fast frames were rendered versus how many triangles
were drawn for the model. As the number of triangles increases, the time per frame in MPF also
increases. A second-order polynomial regression seems to be the best fit for both trend lines.
The WBOIT line is represented by the polynomial regression:
y = -0.0000004x² + 0.0538x + 65.406
The regular transparency is represented by the polynomial regression:
y = -0.0000002x² + 0.0226x + 55.37
For triangle count versus frame time, our data yields a negative coefficient on the highest-order
term, which would imply that at some point the MPF approaches a limiting value. Our data did
not reach such a point for the models we tested; a larger number of different models would be
needed for a more accurate fit, and with so few data points this model may not be reliable.
Figure 25: Measure of average MPF against number of meshes. The jump at the data point for the Sponza
model suggests there is no correlation between time complexity and the number of meshes.
The graph in Figure 25 shows the relation between the time taken to render a frame and how
many meshes the model has. The outlier point corresponds to the Sponza model, which has 38
meshes, fewer than the Parliament model's 85, yet the Sponza model had the largest MPF values.
This suggests that the number of meshes does not correlate with frame rendering time.
Conclusion
WBOIT has significant performance issues with rendering many transparent models of
large complexity, and the difference compared to the alpha blended transparency technique is
noticeable. The loss in rendering speed increases as the models become more complex. The
MPF for the WBOIT simulation we performed can be represented by the equation
y = 0.0000002x² + 0.013x + 77.723, where x is the number of vertices. By comparison, the MPF
for alpha blending in our simulation is modeled by y = 0.0000001x² + 0.0051x + 60.718. Both
equations are second order, so both methods slow at an increasing rate as model complexity
grows; but the coefficient on the highest-order term for alpha blending is half that of WBOIT,
indicating a steeper increase for WBOIT as the number of vertices grows.
The results we have found suggest that, while WebGL is fully capable of rendering
WBOIT objects, even complex ones, the time complexity grows faster than it would for
standard transparency techniques. WebGL is based on an older specification of OpenGL for
Embedded Systems (OpenGL ES 2.0, versus the current 3.1), so there are features not currently
available to WebGL that could help improve this algorithm.
But for now, we conclude that WBOIT is feasible to use in WebGL, but only sparingly.
When creating scenes in WebGL with large amounts of complex models, standard alpha
blending is more efficient. When rendering graphics in a web browser, or with a low powered
graphics card, the increase in accuracy of layered transparency does not make up for the up to
50%+ performance decrease. When deciding how to render graphics in this environment, it is
important to weigh the cost in efficiency against the increased smoothness of the graphics.
Future Research
As we proceeded with developing this program, we noticed our experimental design
could have been better. Our descriptions of color and shape accuracy are currently subjective
observations. We will seek better definitions of these properties from prior literature to make
any future analysis more quantitative. We would also simulate a greater spectrum of vertex
inputs to get a more continuous model to reference, rather than relying on a questionable
regression.
Our experimental procedure could have been more controlled. Currently, there is a
chance of user error in starting and stopping data recording, and we rely on unpredictable
mouse movement of the camera to "tax" the rendering process. In the future we would
automate and control the camera movement and stop data collection precisely after a user-set time.
Javascript and HTML are constantly evolving standards, and recent changes have made file
output difficult with the standard API due to security concerns. This is why we output log data
to a text area in our current implementation. Effective log file output remains an item for
future work.
We initially intended to allow rendering opaque and non-opaque objects using WBOIT,
or even rendering some objects with alpha transparency, WBOIT, and opaque in the same
scene. Rendering alpha transparent and opaque objects is already doable in our
implementation by making alpha value equal 1. However, a combination of limitations of the
algorithm, and our current pipeline with THREE, made it difficult to render opaque objects with
WBOIT.
WBOIT shading seems to be unable to render fully opaque objects without making them
transparent. This is actually a simple fix: McGuire and Bavoil, in their own code snippets,
require that opaque objects be rendered before any WBOIT transparent objects (2013, p. 131).
All we would need to do is the following:
1. Instead of changing materials globally for the entire scene per pass, we would need to
swap material on a per-model basis
2. We would render all opaque objects as a preliminary pass.
3. We would then render the transparent objects with WBOIT.
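A rough sketch of how steps 1 through 3 might look in our current pipeline, using per-model visibility flags to separate the passes, is shown below; the model lists are hypothetical and this is not implemented in our environment.

// Hypothetical sketch: opaqueModels and transparentModels are assumed lists of scene objects
transparentModels.forEach( function ( m ) { m.visible = false; } );
renderer.render( scene, camera );                        // 1-2. opaque pass fills color and depth
transparentModels.forEach( function ( m ) { m.visible = true; } );
opaqueModels.forEach( function ( m ) { m.visible = false; } );
scene.overrideMaterial = accumulationMaterial;
renderer.render( scene, camera, accumulationTexture );   // 3. WBOIT accumulation pass
scene.overrideMaterial = revealageMaterial;
renderer.render( scene, camera, revealageTexture );      //    WBOIT revealage pass
scene.overrideMaterial = null;
opaqueModels.forEach( function ( m ) { m.visible = true; } );
// ...followed by compositing the two textures over the opaque frame, as in Figure 7.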
One of the reasons we did not implement this, however, is that the per-model architecture
would have changed the time complexity relative to the data we had already started collecting,
so we left it for future studies to implement.
Due to funding constraints, and wanting to benchmark for lower-spec hardware, we could
only run simulations on the aforementioned laptop. In future studies, we would utilize multiple
platforms for benchmarking to determine the role of CPU/GPU specifications in the algorithm’s
performance.
References
McGuire, M., & Bavoil, L. (2013). Weighted Blended Order-Independent Transparency. Journal
of Computer Graphics Techniques, 2(2), 122-141.
Maule, M., Comba, J., Torchelsen, R., & Bastos, R. (2013). Hybrid Transparency. In Proceedings of
the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (pp. 103-118).
New York, NY: ACM.
Carnecky, R., Fuchs, R., Mehl, S., Jang, Y., & Peikert, R. (2013). Smart Transparency for Illustrative
Visualization of Complex Flow Surfaces. IEEE Transactions on Visualization and
Computer Graphics, 19(5), 838-851.
Bavoil, L., & Myers, K. (2008). Order Independent Transparency with Dual Depth Peeling. NVIDIA.
Liu, B., Wei, L., Xu, Y., & Wu, E. (2009). Multi-Layer Depth Peeling via Fragment Sort. In 2009 11th
IEEE International Conference on Computer-Aided Design and Computer Graphics
(CAD/Graphics '09) (pp. 452-456).
Enderton, E., Sintorn, E., Shirley, P., & Luebke, D. (2011). Stochastic Transparency. IEEE
Transactions on Visualization and Computer Graphics, 17(8), 157-164.
Salvi, M., Montgomery, J., & Lefohn, A. (2011). Adaptive Transparency. In Proceedings of the ACM
SIGGRAPH Symposium on High Performance Graphics (pp. 119-126). New York, NY: ACM.
Everitt, C. (2001). Interactive Order-Independent Transparency. NVIDIA.
Rose, A. (2014, May 11). Three.js webgl - oit. Retrieved November 17, 2014, from
http://arose.github.io/demo/oit/examples/webgl_oit.html

Contenu connexe

Tendances

Region filling and object removal by exemplar based image inpainting
Region filling and object removal by exemplar based image inpaintingRegion filling and object removal by exemplar based image inpainting
Region filling and object removal by exemplar based image inpaintingWoonghee Lee
 
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...CSCJournals
 
COLOUR IMAGE ENHANCEMENT BASED ON HISTOGRAM EQUALIZATION
COLOUR IMAGE ENHANCEMENT BASED ON HISTOGRAM EQUALIZATIONCOLOUR IMAGE ENHANCEMENT BASED ON HISTOGRAM EQUALIZATION
COLOUR IMAGE ENHANCEMENT BASED ON HISTOGRAM EQUALIZATIONecij
 
Image segmentation based on color
Image segmentation based on colorImage segmentation based on color
Image segmentation based on coloreSAT Journals
 
Edge Representation Learning with Hypergraphs
Edge Representation Learning with HypergraphsEdge Representation Learning with Hypergraphs
Edge Representation Learning with HypergraphsMLAI2
 
TYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSING
TYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSINGTYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSING
TYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSINGKamana Tripathi
 
Contrast limited adaptive histogram equalization
Contrast limited adaptive histogram equalizationContrast limited adaptive histogram equalization
Contrast limited adaptive histogram equalizationEr. Nancy
 
Histogram equalization
Histogram equalizationHistogram equalization
Histogram equalizationtreasure17
 
TYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSING
TYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSINGTYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSING
TYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSINGKamana Tripathi
 
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...Habibur Rahman
 
Point processing
Point processingPoint processing
Point processingpanupriyaa7
 
A Survey on Exemplar-Based Image Inpainting Techniques
A Survey on Exemplar-Based Image Inpainting TechniquesA Survey on Exemplar-Based Image Inpainting Techniques
A Survey on Exemplar-Based Image Inpainting Techniquesijsrd.com
 
A comparative study of histogram equalization based image enhancement techniq...
A comparative study of histogram equalization based image enhancement techniq...A comparative study of histogram equalization based image enhancement techniq...
A comparative study of histogram equalization based image enhancement techniq...sipij
 
Digital image processing
Digital image processingDigital image processing
Digital image processingABIRAMI M
 
3.point operation and histogram based image enhancement
3.point operation and histogram based image enhancement3.point operation and histogram based image enhancement
3.point operation and histogram based image enhancementmukesh bhardwaj
 
Log Transformation in Image Processing with Example
Log Transformation in Image Processing with ExampleLog Transformation in Image Processing with Example
Log Transformation in Image Processing with ExampleMustak Ahmmed
 

Tendances (20)

Region filling and object removal by exemplar based image inpainting
Region filling and object removal by exemplar based image inpaintingRegion filling and object removal by exemplar based image inpainting
Region filling and object removal by exemplar based image inpainting
 
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D...
 
COLOUR IMAGE ENHANCEMENT BASED ON HISTOGRAM EQUALIZATION
COLOUR IMAGE ENHANCEMENT BASED ON HISTOGRAM EQUALIZATIONCOLOUR IMAGE ENHANCEMENT BASED ON HISTOGRAM EQUALIZATION
COLOUR IMAGE ENHANCEMENT BASED ON HISTOGRAM EQUALIZATION
 
Image segmentation based on color
Image segmentation based on colorImage segmentation based on color
Image segmentation based on color
 
Edge Representation Learning with Hypergraphs
Edge Representation Learning with HypergraphsEdge Representation Learning with Hypergraphs
Edge Representation Learning with Hypergraphs
 
TYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSING
TYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSINGTYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSING
TYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSING
 
Contrast limited adaptive histogram equalization
Contrast limited adaptive histogram equalizationContrast limited adaptive histogram equalization
Contrast limited adaptive histogram equalization
 
Histogram equalization
Histogram equalizationHistogram equalization
Histogram equalization
 
Thesis presentation
Thesis presentationThesis presentation
Thesis presentation
 
TYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSING
TYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSINGTYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSING
TYBSC (CS) SEM 6- DIGITAL IMAGE PROCESSING
 
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...
Segmentation of Color Image using Adaptive Thresholding and Masking with Wate...
 
Point processing
Point processingPoint processing
Point processing
 
E0212730
E0212730E0212730
E0212730
 
A Survey on Exemplar-Based Image Inpainting Techniques
A Survey on Exemplar-Based Image Inpainting TechniquesA Survey on Exemplar-Based Image Inpainting Techniques
A Survey on Exemplar-Based Image Inpainting Techniques
 
A comparative study of histogram equalization based image enhancement techniq...
A comparative study of histogram equalization based image enhancement techniq...A comparative study of histogram equalization based image enhancement techniq...
A comparative study of histogram equalization based image enhancement techniq...
 
Background Subtraction Based on Phase and Distance Transform Under Sudden Ill...
Background Subtraction Based on Phase and Distance Transform Under Sudden Ill...Background Subtraction Based on Phase and Distance Transform Under Sudden Ill...
Background Subtraction Based on Phase and Distance Transform Under Sudden Ill...
 
Digital image processing
Digital image processingDigital image processing
Digital image processing
 
Image Inpainting
Image InpaintingImage Inpainting
Image Inpainting
 
3.point operation and histogram based image enhancement
3.point operation and histogram based image enhancement3.point operation and histogram based image enhancement
3.point operation and histogram based image enhancement
 
Log Transformation in Image Processing with Example
Log Transformation in Image Processing with ExampleLog Transformation in Image Processing with Example
Log Transformation in Image Processing with Example
 

En vedette

CS 354 Blending, Compositing, Anti-aliasing
CS 354 Blending, Compositing, Anti-aliasingCS 354 Blending, Compositing, Anti-aliasing
CS 354 Blending, Compositing, Anti-aliasingMark Kilgard
 
Collaboration in an HD World (RAX.IO)
Collaboration in an HD World (RAX.IO)Collaboration in an HD World (RAX.IO)
Collaboration in an HD World (RAX.IO)Rainya Mosher
 
9 Semana Ecológica Navarra
9 Semana Ecológica Navarra9 Semana Ecológica Navarra
9 Semana Ecológica NavarraReyno Gourmet
 
BALLUFF - Solutions for Hydraulic Systems
BALLUFF - Solutions for Hydraulic SystemsBALLUFF - Solutions for Hydraulic Systems
BALLUFF - Solutions for Hydraulic SystemsBernd Schneider
 
Bienaventuranzas de María (pps)
Bienaventuranzas de María (pps)Bienaventuranzas de María (pps)
Bienaventuranzas de María (pps)Ramón Rivas
 
II Jornada Farmacoterapia 2015 DAO - Díptico
II Jornada Farmacoterapia 2015 DAO - Díptico II Jornada Farmacoterapia 2015 DAO - Díptico
II Jornada Farmacoterapia 2015 DAO - Díptico FarmaMadridAP Apellidos
 
BIM - CTE - Valores de Impacto Ambiental
BIM - CTE - Valores de Impacto AmbientalBIM - CTE - Valores de Impacto Ambiental
BIM - CTE - Valores de Impacto AmbientalEnrique Cuarental Bolet
 
Design for Cross-Channel Experiences
Design for Cross-Channel ExperiencesDesign for Cross-Channel Experiences
Design for Cross-Channel ExperiencesPeter Morville
 
Reglas Hermandad de Nuuestra Señora de los Dolores de Constantina
Reglas Hermandad de Nuuestra Señora de los Dolores de ConstantinaReglas Hermandad de Nuuestra Señora de los Dolores de Constantina
Reglas Hermandad de Nuuestra Señora de los Dolores de Constantinahermandadlosdolores
 
Barry Callebaut GRI Report 2014/15
Barry Callebaut GRI Report 2014/15Barry Callebaut GRI Report 2014/15
Barry Callebaut GRI Report 2014/15Barry Callebaut
 
Southland colonial pedestrianoverpass_tre0003-g_11x17plans
Southland colonial pedestrianoverpass_tre0003-g_11x17plansSouthland colonial pedestrianoverpass_tre0003-g_11x17plans
Southland colonial pedestrianoverpass_tre0003-g_11x17plansBrendan O'Connor
 
Trabajo de computación (Web 2.0)
Trabajo de computación (Web 2.0)Trabajo de computación (Web 2.0)
Trabajo de computación (Web 2.0)dayagisselle1997
 
Alberto de Rosa. Entrevista en La excelencia de la Medicina en España
Alberto de Rosa. Entrevista en La excelencia de la Medicina en EspañaAlberto de Rosa. Entrevista en La excelencia de la Medicina en España
Alberto de Rosa. Entrevista en La excelencia de la Medicina en EspañaRibera Salud grupo
 
Invest in Dubai ,Villas &condos at akoya
 Invest in Dubai ,Villas &condos at akoya  Invest in Dubai ,Villas &condos at akoya
Invest in Dubai ,Villas &condos at akoya Zayed Home
 

En vedette (20)

CS 354 Blending, Compositing, Anti-aliasing
CS 354 Blending, Compositing, Anti-aliasingCS 354 Blending, Compositing, Anti-aliasing
CS 354 Blending, Compositing, Anti-aliasing
 
Collaboration in an HD World (RAX.IO)
Collaboration in an HD World (RAX.IO)Collaboration in an HD World (RAX.IO)
Collaboration in an HD World (RAX.IO)
 
LOD2 Plenary Meeting 2011: Institute Mihajlo Pupin – Partner Introduction
LOD2 Plenary Meeting 2011: Institute Mihajlo Pupin – Partner IntroductionLOD2 Plenary Meeting 2011: Institute Mihajlo Pupin – Partner Introduction
LOD2 Plenary Meeting 2011: Institute Mihajlo Pupin – Partner Introduction
 
9 Semana Ecológica Navarra
9 Semana Ecológica Navarra9 Semana Ecológica Navarra
9 Semana Ecológica Navarra
 
BALLUFF - Solutions for Hydraulic Systems
BALLUFF - Solutions for Hydraulic SystemsBALLUFF - Solutions for Hydraulic Systems
BALLUFF - Solutions for Hydraulic Systems
 
Paiporta, 17 de junio de 2012
Paiporta, 17 de junio de 2012Paiporta, 17 de junio de 2012
Paiporta, 17 de junio de 2012
 
Bienaventuranzas de María (pps)
Bienaventuranzas de María (pps)Bienaventuranzas de María (pps)
Bienaventuranzas de María (pps)
 
II Jornada Farmacoterapia 2015 DAO - Díptico
II Jornada Farmacoterapia 2015 DAO - Díptico II Jornada Farmacoterapia 2015 DAO - Díptico
II Jornada Farmacoterapia 2015 DAO - Díptico
 
BIM - CTE - Valores de Impacto Ambiental
BIM - CTE - Valores de Impacto AmbientalBIM - CTE - Valores de Impacto Ambiental
BIM - CTE - Valores de Impacto Ambiental
 
Design for Cross-Channel Experiences
Design for Cross-Channel ExperiencesDesign for Cross-Channel Experiences
Design for Cross-Channel Experiences
 
Reglas Hermandad de Nuuestra Señora de los Dolores de Constantina
Reglas Hermandad de Nuuestra Señora de los Dolores de ConstantinaReglas Hermandad de Nuuestra Señora de los Dolores de Constantina
Reglas Hermandad de Nuuestra Señora de los Dolores de Constantina
 
Surrealismo samuel estrada oliver
Surrealismo samuel estrada oliverSurrealismo samuel estrada oliver
Surrealismo samuel estrada oliver
 
Gestión e investigación de fondos antiguos
Gestión e investigación de fondos antiguosGestión e investigación de fondos antiguos
Gestión e investigación de fondos antiguos
 
Barry Callebaut GRI Report 2014/15
Barry Callebaut GRI Report 2014/15Barry Callebaut GRI Report 2014/15
Barry Callebaut GRI Report 2014/15
 
Southland colonial pedestrianoverpass_tre0003-g_11x17plans
Southland colonial pedestrianoverpass_tre0003-g_11x17plansSouthland colonial pedestrianoverpass_tre0003-g_11x17plans
Southland colonial pedestrianoverpass_tre0003-g_11x17plans
 
Seabee eCourier Jan. 14, 2016
Seabee eCourier Jan. 14, 2016Seabee eCourier Jan. 14, 2016
Seabee eCourier Jan. 14, 2016
 
Trabajo de computación (Web 2.0)
Trabajo de computación (Web 2.0)Trabajo de computación (Web 2.0)
Trabajo de computación (Web 2.0)
 
Alberto de Rosa. Entrevista en La excelencia de la Medicina en España
Alberto de Rosa. Entrevista en La excelencia de la Medicina en EspañaAlberto de Rosa. Entrevista en La excelencia de la Medicina en España
Alberto de Rosa. Entrevista en La excelencia de la Medicina en España
 
Lo verosímil
Lo verosímilLo verosímil
Lo verosímil
 
Invest in Dubai ,Villas &condos at akoya
 Invest in Dubai ,Villas &condos at akoya  Invest in Dubai ,Villas &condos at akoya
Invest in Dubai ,Villas &condos at akoya
 

Similaire à WBOIT Final Version

論文紹介:Learning With Neighbor Consistency for Noisy Labels
論文紹介:Learning With Neighbor Consistency for Noisy Labels論文紹介:Learning With Neighbor Consistency for Noisy Labels
論文紹介:Learning With Neighbor Consistency for Noisy LabelsToru Tamaki
 
Conception_et_realisation_dun_site_Web_d.pdf
Conception_et_realisation_dun_site_Web_d.pdfConception_et_realisation_dun_site_Web_d.pdf
Conception_et_realisation_dun_site_Web_d.pdfSofianeHassine2
 
A Closed-form Solution to Photorealistic Image Stylization
A Closed-form Solution to Photorealistic Image StylizationA Closed-form Solution to Photorealistic Image Stylization
A Closed-form Solution to Photorealistic Image StylizationSherozbekJumaboev
 
Deferred Pixel Shading on the PLAYSTATION®3
Deferred Pixel Shading on the PLAYSTATION®3Deferred Pixel Shading on the PLAYSTATION®3
Deferred Pixel Shading on the PLAYSTATION®3Slide_N
 
Improved Alpha-Tested Magnification for Vector Textures and Special Effects
Improved Alpha-Tested Magnification for Vector Textures and Special EffectsImproved Alpha-Tested Magnification for Vector Textures and Special Effects
Improved Alpha-Tested Magnification for Vector Textures and Special Effectsナム-Nam Nguyễn
 
Visualizing and Understanding Convolutional Networks
Visualizing and Understanding Convolutional NetworksVisualizing and Understanding Convolutional Networks
Visualizing and Understanding Convolutional NetworksWilly Marroquin (WillyDevNET)
 
Image Matting via LLE/iLLE Manifold Learning
Image Matting via LLE/iLLE Manifold LearningImage Matting via LLE/iLLE Manifold Learning
Image Matting via LLE/iLLE Manifold LearningITIIIndustries
 
Denoising Process Based on Arbitrarily Shaped Windows
Denoising Process Based on Arbitrarily Shaped WindowsDenoising Process Based on Arbitrarily Shaped Windows
Denoising Process Based on Arbitrarily Shaped WindowsCSCJournals
 
Multiexposure Image Fusion
Multiexposure Image FusionMultiexposure Image Fusion
Multiexposure Image FusionIJMER
 
Improvement of Image Deblurring Through Different Methods
Improvement of Image Deblurring Through Different MethodsImprovement of Image Deblurring Through Different Methods
Improvement of Image Deblurring Through Different MethodsIOSR Journals
 
M ESH S IMPLIFICATION V IA A V OLUME C OST M EASURE
M ESH S IMPLIFICATION V IA A V OLUME C OST M EASUREM ESH S IMPLIFICATION V IA A V OLUME C OST M EASURE
M ESH S IMPLIFICATION V IA A V OLUME C OST M EASUREijcga
 
Lecture 15 image morphology examples
Lecture 15 image morphology examplesLecture 15 image morphology examples
Lecture 15 image morphology examplesMarwa Ahmeid
 
E4040.2016 fall.cjmd.report.ce2330.jb3852.jdr2162
E4040.2016 fall.cjmd.report.ce2330.jb3852.jdr2162E4040.2016 fall.cjmd.report.ce2330.jb3852.jdr2162
E4040.2016 fall.cjmd.report.ce2330.jb3852.jdr2162Jose Daniel Ramirez Soto
 
Using A Application For A Desktop Application
Using A Application For A Desktop ApplicationUsing A Application For A Desktop Application
Using A Application For A Desktop ApplicationTracy Huang
 
Segmentation of Images by using Fuzzy k-means clustering with ACO
Segmentation of Images by using Fuzzy k-means clustering with ACOSegmentation of Images by using Fuzzy k-means clustering with ACO
Segmentation of Images by using Fuzzy k-means clustering with ACOIJTET Journal
 

Similaire à WBOIT Final Version (20)

論文紹介:Learning With Neighbor Consistency for Noisy Labels
論文紹介:Learning With Neighbor Consistency for Noisy Labels論文紹介:Learning With Neighbor Consistency for Noisy Labels
論文紹介:Learning With Neighbor Consistency for Noisy Labels
 
Conception_et_realisation_dun_site_Web_d.pdf
Conception_et_realisation_dun_site_Web_d.pdfConception_et_realisation_dun_site_Web_d.pdf
Conception_et_realisation_dun_site_Web_d.pdf
 
A Closed-form Solution to Photorealistic Image Stylization
A Closed-form Solution to Photorealistic Image StylizationA Closed-form Solution to Photorealistic Image Stylization
A Closed-form Solution to Photorealistic Image Stylization
 
Deferred Pixel Shading on the PLAYSTATION®3
Deferred Pixel Shading on the PLAYSTATION®3Deferred Pixel Shading on the PLAYSTATION®3
Deferred Pixel Shading on the PLAYSTATION®3
 
Improved Alpha-Tested Magnification for Vector Textures and Special Effects
Improved Alpha-Tested Magnification for Vector Textures and Special EffectsImproved Alpha-Tested Magnification for Vector Textures and Special Effects
Improved Alpha-Tested Magnification for Vector Textures and Special Effects
 
Visualizing and Understanding Convolutional Networks
Visualizing and Understanding Convolutional NetworksVisualizing and Understanding Convolutional Networks
Visualizing and Understanding Convolutional Networks
 
Image Matting via LLE/iLLE Manifold Learning
Image Matting via LLE/iLLE Manifold LearningImage Matting via LLE/iLLE Manifold Learning
Image Matting via LLE/iLLE Manifold Learning
 
Denoising Process Based on Arbitrarily Shaped Windows
Denoising Process Based on Arbitrarily Shaped WindowsDenoising Process Based on Arbitrarily Shaped Windows
Denoising Process Based on Arbitrarily Shaped Windows
 
Log polar coordinates
Log polar coordinatesLog polar coordinates
Log polar coordinates
 
Multiexposure Image Fusion
Multiexposure Image FusionMultiexposure Image Fusion
Multiexposure Image Fusion
 
Improvement of Image Deblurring Through Different Methods
Improvement of Image Deblurring Through Different MethodsImprovement of Image Deblurring Through Different Methods
Improvement of Image Deblurring Through Different Methods
 
robio-2014-falquez
robio-2014-falquezrobio-2014-falquez
robio-2014-falquez
 
M ESH S IMPLIFICATION V IA A V OLUME C OST M EASURE
M ESH S IMPLIFICATION V IA A V OLUME C OST M EASUREM ESH S IMPLIFICATION V IA A V OLUME C OST M EASURE
M ESH S IMPLIFICATION V IA A V OLUME C OST M EASURE
 
Lecture 15 image morphology examples
Lecture 15 image morphology examplesLecture 15 image morphology examples
Lecture 15 image morphology examples
 
Matlab abstract 2016
Matlab abstract 2016Matlab abstract 2016
Matlab abstract 2016
 
E4040.2016 fall.cjmd.report.ce2330.jb3852.jdr2162
E4040.2016 fall.cjmd.report.ce2330.jb3852.jdr2162E4040.2016 fall.cjmd.report.ce2330.jb3852.jdr2162
E4040.2016 fall.cjmd.report.ce2330.jb3852.jdr2162
 
Using A Application For A Desktop Application
Using A Application For A Desktop ApplicationUsing A Application For A Desktop Application
Using A Application For A Desktop Application
 
stylegan.pdf
stylegan.pdfstylegan.pdf
stylegan.pdf
 
G04654247
G04654247G04654247
G04654247
 
Segmentation of Images by using Fuzzy k-means clustering with ACO
Segmentation of Images by using Fuzzy k-means clustering with ACOSegmentation of Images by using Fuzzy k-means clustering with ACO
Segmentation of Images by using Fuzzy k-means clustering with ACO
 

WBOIT Final Version

  • 1. Rendering Complex Models Using Weighted Blended Order-Independent Transparency in WebGL By Samuel Cosgrove, Brock Stoops MAP/COP 3930H Abstract: Attempts at solving the problem of order-independent transparency have found solutions that, while valid and useful, tend to require high power machines that may not fit standard graphics architecture. However, recent breakthroughs in the technique of weighted blended order-independent transparency have opened possibilities of rendering accurate transparency on low-spec hardware and low-spec versions of OpenGL. Implementations of the technique have been successfully demonstrated in WebGL. However, most of these demos only attempt to render quads or simple primitives. We test the efficiency of rendering many models of growing complexity utilizing weighted blended order-independent transparency. We compare the performance and accuracy to standard alpha blended transparency, and attempt to identify contributing factors to render time increase between these methods. 2014
  • 2. P a g e | 2 Introduction One of the properties of the real world that has been a challenge since the inception of graphics study is transparency, specifically the effect of occlusion on objects of varying levels of transparency. The class of techniques for rendering transparency can be divided into two groups: order-dependent strategies such as alpha blending and order-independent transparency (McGuire & Balvoil, 2013, p. 2). The study of order-independent transparency (henceforth abbreviated as OIT) has increased in recent years. But these techniques usually target high-end machines and occasionally utilize non-standard graphics hardware to address the bleeding-edge possibilities. Of note, however, is a recently proposed method of OIT called weighted blended OIT (McGuire & Balvoil, 2013, p. 2). This uses a function related to the distance from the camera to change the values of opaqueness, rendering multiple transparent layers with greater accuracy. Furthermore, this method is compatible with OpenGL variants for embedded systems, including WebGL. Demonstrations have been made utilizing the technique in WebGL, but these have usually attempted to only render simple primitives like quads, cubes, or spheres. We will attempt to render multiple complex models at once utilizing weighted blended OIT (henceforth abbreviated to WBOIT). We will explore traditional methods of transparency rendering, in comparison with WBOIT. We will run simulations with these techniques, rendering transparent models of increasing complexity. We will discuss these results and what observations can be made about what factors impact the processing time. Background A. Alpha Compositing Alpha compositing combines a rendered image with the background to create the appearance of full or partial transparency. It requires an alpha value for each element to determine where color should be drawn and where the element should appear empty (Carnecky, Fuchs, Mehl, Jang, Peikert, 2013, p. 839). Alvy Ray Smith created the alpha channel in the late 1970s to store the alpha information about each pixel. Values between 0 and 1 inclusive were stored for each pixel to denote values of transparency and opaqueness. In our demo each pixel color is displayed as an RGBA tuple which would look like this: (0.25, 0.5, 0.0, 0.5) This represents the pixel having a 25% maximum red intensity, a 50% maximum green intensity and 50% opacity. A value of 0 would represent that color being not visible in the pixel or completely transparent, and a value of 1 would be a maximum value of color or a completely opaque pixel. The alpha channel can express alpha compositing utilizing compositing algebra. The most common operation in compositing algebra is the over operation. This denotes that one image element is in the foreground and the other is in the background. More simply, one is over the other (Bavoil & Myers, 2008, p. 2). The formula below can be used on each pixel to find the result of the image element overlaying. C0 = Ca αa + Cbαb (1 – αa)
  • 3. P a g e | 3 In the above formula CO is the result of the over operation, Ca is the color of pixel a and Cb is the color of pixel b. αa is the alpha of the pixels in element a and αb is the alpha of the pixels in element b (Bavoil & Myers, 2008, p. 4). When dealing with the merging of layers, an associative version of this equation is generally used: CO = (1/αO) [Ca αa + Cbαb(1 – αa)]; αO = αa + αb(1 - αa) B. Alpha Blending Alpha blending is an extension of alpha compositing, combining a translucent foreground color with a background color. These layers combine and blend together to create a new color (Liu, Wei, Xu, Wu, 2009, p. 1). Alpha blending is described by the equation below. DestinationColor.rgb = (SourceColor.rgb*SourceColor.a)+(DestinationColor.rgb*(1-SourceColor.a)) However, this equation requires that source colors are multiplied at the time of pixel calculations, which can be very time inefficient when generating millions of pixels at a time (Salvi, Montgomery, Lefohn, 2011, p. 120). C. PartialCoverage Partial Coverage is a phenomenon when a surface transmits fractions of light without refraction, and that causes the surface to emit or reflect light. Partial coverage is used to emulate different modeling surfaces, such as a wheel modeled as a square. It is used when overlaying multiple layers of the same or similar images, and measuring the net coverage. Figure 1: Demonstration of partial coverage (McGuire & Bavoil, 2013 p. 126). In the above picture there are three screens of identical height and width and a 50% net coverage per screen. In (a), all three screens are perfectly stacked, so the total image has a 50% net coverage. In (b) they are aligned in some random formation. This causes a total net coverage for the image to be somewhere between 50 and 100%. In (c) they are stacked with perfect alignment, causing the total net coverage to be 100% Partial coverage is calculated similarly to alpha blending. Instead of alpha measuring the opacity, it measures net coverage of the layers. As with alpha blending, it requires pre-
C. Partial Coverage

Partial coverage describes a surface that covers only a fraction of a pixel: the uncovered fraction transmits light without refraction, while the covered fraction emits or reflects light. Partial coverage is used to emulate modeled surfaces that do not fill their geometry, such as a round wheel drawn on a square quad. It is also used when overlaying multiple layers of the same or similar images and measuring the net coverage.

Figure 1: Demonstration of partial coverage (McGuire & Bavoil, 2013, p. 126).

In the above picture there are three screens of identical height and width, each with 50% net coverage. In (a), all three screens are perfectly stacked, so the total image has 50% net coverage. In (b) they are arranged in a random formation, so the total net coverage of the image is somewhere between 50% and 100%. In (c) they are arranged so that none of them overlap, giving a total net coverage of 100%.

Partial coverage is calculated similarly to alpha blending, except that instead of alpha measuring opacity, it measures the net coverage of the layers. As with alpha blending, it requires pre-multiplied values for the color of the pixel (Enderton, Sintorn, Shirley, Luebke, 2011, p. 2). This increases rendering speed and avoids color bleeding when α is zero, while still allowing zero-alpha (no coverage) fragments to contribute intensity.

C_f = C_1 + C_0·(1 − α_1)

This equation calculates the partial coverage composite of two colors C_1 and C_0, where C_0 is the background and C_1 is the foreground (Enderton et al., 2011, p. 2). Again, in this equation α stands for the fraction of the image that is covered: a value of 1 means 100% of the image is covered, and a value of 0.5 means that 50% of the image is covered. The most common partial coverage technique used today is sorted over compositing, although it is not the most accurate.

D. Order-Independent Transparency

OIT differs from the techniques above by not requiring geometry to be rendered in sorted order (Maule, Comba, Torchelsen, Bastos, 2013, p. 103). Ordering takes up processing time every frame and does not always produce the most accurate image (Everitt, 2001, p. 1). Instead, OIT techniques resolve the ordering per pixel after rasterization. The previous equations for partial coverage and alpha blending are not commutative, and therefore order dependent: they are calculated and rendered from back to front, layer on top of layer (Maule et al., 2013, p. 103). To obtain order independence, the values must be combined using commutative operations such as addition or multiplication.

E. Weighted Blended Order-Independent Transparency

The main idea of WBOIT is that, when blending the layers together, different weights are given to colors based on their distance from the camera. This prevents surfaces with similar coverage from overpowering the average color, which happens in other OIT techniques. McGuire and Bavoil propose a blending equation in which every transparent fragment covering a pixel contributes to a weighted average (2013, p. 128). The weight function w(z_i, α_i) can be built from any monotonically decreasing, non-zero function of z. The weighting creates an occlusion cue between images and layers, allowing for very accurate transparency. McGuire and Bavoil proposed several different weight functions to attempt the best image accuracy (2013, p. 128); the weight functions require a large variance to compensate for the small values of the distance from the camera, and each is built around a depth term d(z), where z is the distance from the camera (McGuire & Bavoil, 2013, p. 128).

Although in some cases an image rendered with WBOIT looks similar to regular alpha blending with the over operator, WBOIT still theoretically runs more efficiently because it requires no order dependency or sorting. One of the downfalls of WBOIT is that it is not translation-invariant along the depth axis, which means that changing the distance of the scene from the camera will change the color of the whole image (McGuire & Bavoil, 2013, p. 128).
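As a rough illustration of the weighted average, the sketch below evaluates the blended result for a list of fragments on the CPU. It is our own paraphrase of the blending described by McGuire and Bavoil, not their published listing; the weight function shown is only one plausible monotonically decreasing choice, and all names are ours.

// Composite a list of transparent fragments covering one pixel, each of the form
// { color: [r, g, b], alpha: a, z: distanceFromCamera }, over a background color.
function weightedBlend(fragments, background) {
    var accum = [0, 0, 0];   // sum of weighted, premultiplied colors
    var accumAlpha = 0;      // sum of weighted alphas
    var revealage = 1;       // product of (1 - alpha): how much background shows through
    fragments.forEach(function (f) {
        var w = weight(f.z, f.alpha);
        accum[0] += f.color[0] * f.alpha * w;
        accum[1] += f.color[1] * f.alpha * w;
        accum[2] += f.color[2] * f.alpha * w;
        accumAlpha += f.alpha * w;
        revealage *= (1 - f.alpha);
    });
    var coverage = 1 - revealage;
    return [0, 1, 2].map(function (i) {
        return (accum[i] / Math.max(accumAlpha, 1e-5)) * coverage + background[i] * revealage;
    });
}

// One plausible weight: nearer fragments weigh more, clamped to a bounded range.
function weight(z, alpha) {
    var d = Math.abs(z) / 5.0;
    return alpha * Math.max(1e-2, Math.min(3e3, 10.0 / (1e-5 + d * d)));
}

Because every fragment only adds into running sums and multiplies into a running product, the result is the same regardless of the order in which the fragments are processed, which is exactly the order independence the technique is after.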
Method

Our WBOIT environment is an extension of prior work done by Alexander Rose with his single-mesh OIT demo (Rose, 2014), itself derived from the WBOIT method described by McGuire and Bavoil. Our work is adapted to render many objects made of complex meshes. It renders a scene of 64 models, each with a random color, arranged in a 4x4x4 cube. The model and rendering style are determined by on-screen controls, and the environment allows real-time mouse input for camera control. Due to limitations in Chrome for file reading, the environment is currently only compatible with Firefox.

A. WBOIT Environment File Structure

Our environment is arranged like many WebGL projects, with an HTML front end and script files that utilize GLSL shaders. We designed the file structure and separation to encapsulate functionality for reuse and ease of understanding.

Figure 2: File structure of the WBOIT environment.
The environment is instantiated and managed by the "mainScript.js" script file. It interfaces with the other scripts and files in order to generate the desired environment; its role is described further in the sections below. The "lib" directory contains the other scripts. Inside is a "renderManager.js" script which encapsulates model loading and rendering; it is further explained in Section C. Renderers.

We make extensive use of third-party scripts to facilitate prototyping our simulation. The THREE JavaScript library, created by Ricardo Cabello, is a commonly used WebGL graphics library that simplifies major components of setting up and rendering an environment. The "stats.js" script, also developed by Ricardo Cabello, displays an on-screen widget with real-time performance statistics; we modified it slightly to facilitate capturing data for experimental output. "TrackballControls.js", developed by Eberhard Graether and Mark Lundin, is a commonly used extension for WebGL projects that facilitates camera control. "Detector.js", developed by Ricardo Cabello and "AlteredQualia", checks the WebGL compatibility of a browser.

Our "model" directory contains five model files in JSON format: "cube", "teapot", "house", "house_of_parliament", and "dabrovic-sponza". The "shaders" directory contains all of our vertex and fragment shaders, written in the OpenGL Shading Language (GLSL). These are described extensively in Section C. Renderers.

B. Model Loading Pipeline

Initially, a model name is given to the "loadNewModels()" function. For each position in the 4x4x4 cube, "addModel()" is run with parameters of position, a random color with an alpha value of 0.5, and the model name. The "addModel()" function first runs "modelLoad()" to generate the model as a THREE.Object3D instance. This model is colored, positioned, and added to the scene. The "modelLoad()" function retrieves mesh data from the model JSON file, which is passed to the "generateModelFromInfo()" function to obtain the THREE.Object3D instance. For our purposes, we scale the model based on its minimum and maximum bounds so it fits into an approximately 1x1x1 cube in the world. A sketch of this placement loop follows Figure 3.

"meshes": [ {
    "vertexPositions" : [ -1,0,-1, 1,0,-1, -1,0,1, 1,0,1 ],
    "vertexNormals" : [0,1,0,0,1,0,0,1,0,0,1,0],
    "vertexTexCoordinates" : [[0,0,1,0,0,1,1,1]],
    "indices" : [0,1,2,2,3,1],
    "materialIndex" : 0
} ],

Figure 3: Each model has meshes, each with vertices and indices that determine the order in which triangles are drawn.
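The sketch below shows one way the 4x4x4 placement loop could look. The names loadNewModels, addModel, and modelLoad come from the pipeline description above, but their bodies here are simplified stand-ins rather than the actual implementation, and the helper randomTransparentColor is hypothetical.

// Load one model type into every cell of the 4x4x4 grid, each with a random
// color at 50% opacity. Assumes THREE and a populated `scene` are in scope.
function loadNewModels(modelName) {
    for (var x = 0; x < 4; x++) {
        for (var y = 0; y < 4; y++) {
            for (var z = 0; z < 4; z++) {
                addModel(new THREE.Vector3(x, y, z), randomTransparentColor(), modelName);
            }
        }
    }
}

function addModel(position, color, modelName) {
    var model = modelLoad(modelName);   // returns a THREE.Object3D built from the JSON mesh data
    model.position.copy(position);
    model.userData.color = color;       // stand-in for however the color reaches the shaders
    scene.add(model);
}

// Hypothetical helper: random RGB with the fixed alpha of 0.5 used in our scenes.
function randomTransparentColor() {
    return new THREE.Vector4(Math.random(), Math.random(), Math.random(), 0.5);
}

Spacing and centering of the grid are omitted here; in the environment each model is also scaled to its 1x1x1 bounds before being positioned.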
C. Renderers

The opaque renderer and transparent renderer are simple renderers: the former renders solid color, and the latter supports transparency with standard alpha blending. Below are the rendering materials defined for both.

// Standard opaque material
material = new THREE.RawShaderMaterial( {
    vertexShader: parseDoc( './shaders/vertexShader.js' ),
    fragmentShader: parseDoc( './shaders/fragmentShader.js' ),
    side: THREE.DoubleSide,
    transparent: false
} );

// Standard alpha blended material
transparentMaterial = new THREE.RawShaderMaterial( {
    vertexShader: parseDoc( './shaders/vertexShader.js' ),
    fragmentShader: parseDoc( './shaders/fragmentShader.js' ),
    side: THREE.DoubleSide,
    transparent: true
} );

Figure 4: The opaque and transparent render materials, instances of THREE.RawShaderMaterial. Note that the THREE library exposes a simple transparency flag and handles the blending internally.

Rendering objects with these renderers is as simple as calling the THREE render function. Below are the vertex and fragment shaders that both types of renderers use.

// Standard color to pass to the fragment shader
fragColor = color;

// Standard calculation of the vertex position, passed to the fragment shader
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );

// Calculate the fragment depth in camera space
vec4 fragCoord = modelViewMatrix * vec4( position, 1.0 );
fragZ = fragCoord.z;

Figure 5: Main function for the standard vertex shader.

vec4 color = vec4( fragColor );
gl_FragColor = color;

Figure 6: Main function for the standard fragment shader.

The WBOIT process is more intensive and requires a three-pass algorithm. One pass renders the accumulated color to a texture. Another renders "revealage", the opposite of coverage, to a second texture (McGuire & Bavoil, 2013, p. 129). A final pass composites these two textures into the final rendered frame.
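The two intermediate textures are ordinary off-screen render targets. The snippet below shows one plausible way to create them with THREE; the variable names and the choice of a floating-point format are our assumptions rather than a listing from the environment.

// Off-screen targets for the accumulation and revealage passes. A floating-point
// format keeps the accumulated sums from clamping at 1.0 (in WebGL 1.0 this
// requires the OES_texture_float extension).
var accumulationTexture = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, {
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    format: THREE.RGBAFormat,
    type: THREE.FloatType
} );

var revealageTexture = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, {
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    format: THREE.RGBAFormat,
    type: THREE.FloatType
} );

The composite pass then samples both targets through the texAccum and texReveal uniforms shown in Figure 7.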
Below is the function in "renderManager.js" that renders a frame with WBOIT. Note that after the accumulation and revealage textures are rendered, they are passed into the compositing uniforms before the composite render pass.

renderer.clearColor();

// Render accumulation texture
scene.overrideMaterial = accumulationMaterial;
renderer.render( scene, camera, accumulationTexture );

// Render revealage texture
scene.overrideMaterial = revealageMaterial;
renderer.render( scene, camera, revealageTexture );

// Add textures to compositing shaders
compositingUniforms[ "texAccum" ].value = accumulationTexture;
compositingUniforms[ "texReveal" ].value = revealageTexture;

// Render composited frame
renderer.render( compositeScene, compositeCamera );
scene.overrideMaterial = null;

Figure 7: WBOIT render function code.

The accumulation and revealage render passes each utilize the standard vertex shader, but the fragment shaders are unique. Below is the code for these shaders. They share the same weight function for alpha values, to facilitate the summation process described for WBOIT.

float alpha = fragColor.a;

// Scale color based on alpha value
vec3 Ci = fragColor.rgb * alpha;

// Further scale color and alpha based on the weighted alpha
gl_FragColor = vec4( Ci, alpha ) * w( alpha );

Figure 8: Main function for the fragment shader that creates the accumulation texture.

float alpha = fragColor.a;

// Calculate revealage based on the weighted alpha value of the fragment
gl_FragColor = vec4( vec3( w( alpha ) ), 1.0 );

Figure 9: Main function for the fragment shader that creates the revealage texture.
float colorResistance = 1.0;
float rangeAdjustmentsClampBounds = 10.0;
float depth = abs( fragZ );
float orderingDiscrimination = 200.0;
float orderingStrength = 5.0;
float minValue = 1e-2;
float maxValue = 3e3;

return pow( a, colorResistance ) *
       clamp( rangeAdjustmentsClampBounds /
              ( 1e-5 + ( depth / orderingStrength ) + ( depth / orderingDiscrimination ) ),
              minValue, maxValue );

Figure 10: Weight function used by the fragment shaders. It takes in the alpha value a and scales it based on the distance from the camera.

Below are the vertex and fragment shaders for the final pass, which composites the RGB and alpha values from the two textures into the final rendered frame.

texCoords = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );

Figure 11: Main function for the composite frame vertex shader.

// Color from the accumulation texture at these coordinates
vec4 accum = texture2D( texAccum, texCoords );

// Alpha from the revealage texture at these coordinates
float reveal = texture2D( texReveal, texCoords ).r;

// Composite the above to calculate the color at this fragment
gl_FragColor = vec4( accum.rgb / clamp( accum.a, 1e-9, 5e9 ), reveal );

Figure 12: Main function for the composite frame fragment shader; it composites color from the accumulation texture and alpha from the revealage texture.
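The intermediate passes also depend on the blend state, so that every transparent surface adds to the targets instead of overwriting them. The settings below follow the blend functions described by McGuire and Bavoil (2013) and are only a sketch of how they could be expressed through THREE's CustomBlending; they are not taken from our environment's source, which may differ.

// Accumulation pass: additive blending sums every weighted fragment (buffer cleared to 0).
accumulationMaterial.blending = THREE.CustomBlending;
accumulationMaterial.blendEquation = THREE.AddEquation;
accumulationMaterial.blendSrc = THREE.OneFactor;            // gl.ONE
accumulationMaterial.blendDst = THREE.OneFactor;            // gl.ONE

// Revealage pass: multiplies (1 - coverage) terms together (buffer cleared to 1).
revealageMaterial.blending = THREE.CustomBlending;
revealageMaterial.blendEquation = THREE.AddEquation;
revealageMaterial.blendSrc = THREE.ZeroFactor;              // gl.ZERO
revealageMaterial.blendDst = THREE.OneMinusSrcColorFactor;  // gl.ONE_MINUS_SRC_COLOR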
D. Experimental Procedure

All of our simulations were performed on a laptop running a 2.0 GHz Intel Core i7 2635QM CPU with four independent processor cores. The GPU is an AMD Radeon HD 6490M with 256 MB of dedicated GDDR5 memory. The hard drive is a Seagate 1 TB Solid State Hybrid Drive (SATA 6 Gbps, 64 MB cache).

We had a total of 15 experimental conditions: all combinations of model type (Cube, Teapot, House, Parliament, and Sponza) and rendering type (WBOIT, opaque, and transparent). For each condition, we recorded the milliseconds per frame (henceforth abbreviated to MPF) over the course of a minute. The scene being rendered was rotated in an arbitrary manner for the length of the recording. The output was transferred to spreadsheets for further analysis.

Results

Below are the results for the rendering of the Cube model, along with an image of both the WBOIT (left) and the regular alpha blended transparency (right).

Figure 13: Cube scene MPF over a minute.

Figure 14: Cubes rendered with WBOIT (14a, left) and regular transparency (14b, right).

Figure 14a, when compared to Figure 14b, looks smoother, with each individual layer visible. Figure 14b loses transparency information when many models are stacked, while each layer remains clearly visible with the WBOIT technique. Though the transparency has much more depth in the WBOIT method, it comes at a notable cost in efficiency: the average MPF is noticeably larger than with regular transparency. The graph shows that WBOIT normally falls between 60 and 80 MPF, with an average of 68.188 MPF. Regular transparency, as shown in Figure 13, falls between 50 and 70 MPF, with an average of 54.838 MPF. The difference between the techniques, (68.188 − 54.838) / 68.188, is an approximate 20% decrease in efficiency for WBOIT.
Figure 15: Teapot scene MPF over a minute.

Figure 16: Teapots rendered with WBOIT (16a, left) and regular transparency (16b, right).

Figure 15 displays a continuing trend: WBOIT in our simulation is less efficient than standard transparency. Though the teapot model is more complex than a cube, the speed of rendering the scene is very similar. The WBOIT teapot scene takes between 60 and 80 milliseconds to render a frame, with an average of 66.89 milliseconds. Standard transparency takes between 45 and 65 milliseconds, with an average of 55.62 milliseconds per frame. This makes the regular transparency technique approximately 17% faster for this model type.

Of note is that the teapot scene rendered with standard transparency looks very close to the WBOIT version. Figure 16b shows many depth layers of teapots stacked on top of each other, with even the furthest layer clearly visible. WBOIT still provides a higher degree of information, as shown in Figure 16a.
Figure 17: House scene MPF over a minute.

Figure 18: Houses rendered with WBOIT (18a, left) and regular transparency (18b, right).

Figure 17 shows the same trend of WBOIT being less time efficient, and for the house scene the difference in performance increases. In Figure 17 we can see WBOIT hovering in the 120 to 140 MPF range, with only a few outliers dropping below 100 MPF. This gives an average of 116.06 milliseconds to render a frame for the scene shown in Figure 18a. Standard transparency hovers between 75 and 100 MPF, barring a few outliers, with an average of 83.24 milliseconds per frame in our recordings. This is a decrease in efficiency of around 30% for WBOIT.
Figure 19: Parliament scene MPF over a minute.

Figure 20: Parliament models rendered with WBOIT (20a, left) and regular transparency (20b, right).

The graph in Figure 19 demonstrates how the interval between frames keeps growing. The Parliament model, shown in Figures 20a and 20b, is the most complex model tested so far. Because the Parliament model is very complex, with over 80 meshes of varying complexity, fewer data points could be generated due to the slow rendering of each frame: in the first three models we had over 1000 data points in each minute of testing, but the Parliament model produced only about 200. The WBOIT data line hovers around the 350 millisecond mark, with about 25 milliseconds of leeway in each direction, by far the slowest render thus far. Standard transparency hovers between 150 and 200 milliseconds. The averages are 348.17 milliseconds for WBOIT and 174.92 milliseconds for the regular transparency technique.
In other words, standard transparency renders each frame in roughly half the time of the WBOIT technique, about a 50% reduction in MPF.

Figure 21: Sponza scene MPF over a minute.

Figure 22: Sponza models rendered with WBOIT (22a, left) and regular transparency (22b, right).

Figure 21 shows the data for the Sponza model, the slowest performing model, likely due to its large number of vertices. The WBOIT values are the slowest yet, hovering between 1500 and 2000 MPF; standard transparency stayed between 700 and 1300 MPF throughout the recording. The Sponza scene produced only around 30 data points in the whole minute of recording for WBOIT, taking almost two whole seconds to render a single frame. The average MPF for WBOIT was 1733 and for transparency was 833, so for this model the transparency technique rendered over twice as fast as the WBOIT method.

The color difference between the two techniques is most noticeable with this model, as we can clearly see all of the individual layers in the WBOIT render shown in Figure 22a. With standard transparency in Figure 22b, it is hard to see the models behind the closest rendered one; the hollowed portion of the model is the only place where the colors of the models behind it can be noticed.
Figure 23: Average MPF plotted against number of vertices.

Figure 23 shows the relation between the number of vertices and MPF. As the number of vertices increases, the time taken per frame also increases. Figure 23 also shows that MPF grows more slowly for standard transparency than for the WBOIT method, which explains why the gap between the data points widened with the more complex models. Both trend lines are best approximated by second-order polynomial regressions. The WBOIT line is represented by the polynomial regression:

y = 0.0000002x² + 0.013x + 77.723

The regular transparency line is represented by the polynomial regression:

y = 0.0000001x² + 0.0051x + 60.718
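As a quick sanity check of the fitted curves, the snippet below evaluates both regressions at a hypothetical vertex count; the count is made up for illustration and does not correspond to any of the models we tested.

// Predicted milliseconds per frame from the fitted curves above.
function mpfWBOIT(vertices) {
    return 0.0000002 * vertices * vertices + 0.013 * vertices + 77.723;
}
function mpfTransparency(vertices) {
    return 0.0000001 * vertices * vertices + 0.0051 * vertices + 60.718;
}

// Hypothetical scene totaling 50,000 vertices across the 4x4x4 arrangement:
mpfWBOIT(50000);        // 500 + 650 + 77.723 ≈ 1227.7 MPF
mpfTransparency(50000); // 250 + 255 + 60.718 ≈  565.7 MPF

At this scale the quadratic term is already comparable to the linear term for WBOIT, which is consistent with the widening gap we observed for the more complex scenes.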
Figure 24: Average MPF plotted against number of triangles.

The graph in Figure 24 shows how long frames took to render versus how many triangles were drawn for each model. As the number of triangles increases, the time taken per frame also increases, and a second-order polynomial regression again seems to be the best fit for both trend lines. The WBOIT line is represented by the polynomial regression:

y = -0.0000004x² + 0.0538x + 65.406

The regular transparency line is represented by the polynomial regression:

y = -0.0000002x² + 0.0226x + 55.37

In this fit the coefficient of the highest-order term is negative, which would mean that at some point the MPF levels off at a limiting value as the triangle count grows. Our data did not appear to reach such a point with the models we tested; to get the most accurate representation we would need a larger number of different models, and this fit may well be inaccurate because of the lack of data points.
Figure 25: Average MPF plotted against number of meshes.

The graph in Figure 25 shows the time taken to render a frame compared against how many meshes each model has. Because there is a jump at the data point for the Sponza model, the plot suggests there is no correlation between time complexity and the number of meshes: the outlier point corresponds to the Sponza model, which has 38 meshes, fewer than the Parliament model's 85, yet the Sponza model had the larger MPF values. This indicates that the number of meshes is not a measure that correlates with frame rendering time.

Conclusion

WBOIT has significant performance issues when rendering many transparent models of large complexity, a noticeable difference compared to the alpha blended transparency technique, and the loss in rendering speed increases as the models get more complex. The MPF for the WBOIT simulation we performed can be represented by the following equation, where x is the number of vertices:

y = 0.0000002x² + 0.013x + 77.723

By comparison, the MPF for alpha blending in our simulation is modeled by the following:

y = 0.0000001x² + 0.0051x + 60.718

Both equations are second order, so we can conclude that both techniques slow down more quickly as the complexity of the model increases. But the coefficient of the highest-order term is half the size for alpha blending, indicating a steeper increase for WBOIT as the number of vertices grows.

The results we have found suggest that, while WebGL is fully capable of rendering WBOIT objects, even complex ones, the time complexity grows faster than it does for standard transparency techniques. WebGL is based on an older specification of OpenGL for Embedded Systems (ES 2.0, versus the current ES 3.1), so there are features not currently available to WebGL that could help improve this algorithm. For now, we conclude that WBOIT is feasible to use in WebGL, but only sparingly. When creating scenes in WebGL with large numbers of complex models, standard alpha blending is more efficient. When rendering graphics in a web browser, or with a low-powered graphics card, the increase in accuracy of layered transparency does not make up for a performance decrease of 50% or more.
When deciding how to render graphics in this environment, it is important to weigh the cost in efficiency against the increased smoothness of the graphics.

Future Research

As we proceeded with developing this program, we noticed our experimental design could have been better. Our descriptions of color and shape accuracy are currently subjective observations; we will seek better definitions of these properties from prior literature to make any future analysis more quantitative. We would also simulate a greater spectrum of vertex counts to get a more continuous model to reference, rather than relying on a questionable regression.

Our experimental procedure could have been more controlled. Currently, we leave a chance of user error in starting and stopping the data recording, and we rely on unpredictable mouse movement of the camera to "tax" the rendering process. In the future we would automate and control the movement of the camera and stop data collection precisely after a user-set time.

JavaScript and HTML are constantly evolving standards, and recent changes have made file output difficult with the standard API due to security concerns. This is why we output log data to a text area in our current implementation; future work will look at more effective log file output.

We initially intended to allow rendering opaque and non-opaque objects using WBOIT, or even rendering some objects with alpha transparency, WBOIT, and opaque materials in the same scene. Rendering alpha transparent and opaque objects is already possible in our implementation by setting the alpha value to 1. However, a combination of limitations of the algorithm and of our current pipeline with THREE made it difficult to render opaque objects with WBOIT; the WBOIT shading appears unable to render fully opaque objects without making them transparent. This is actually a simple fix: McGuire and Bavoil, in their own code snippets, require that opaque objects render before any WBOIT transparent objects (2013, p. 131). All we would need to do is the following (a sketch of this pass ordering appears at the end of this section):

1. Instead of changing materials globally for the entire scene per pass, swap materials on a per-model basis.
2. Render all opaque objects as a preliminary pass.
3. Render the transparent objects with WBOIT.

One of the reasons we did not implement this, however, is that this per-model architecture would have changed the time complexity relative to the data we had already started collecting, so we left it for future studies to implement.

Due to funding constraints, and a desire to benchmark for lower-spec hardware, we could only run simulations on the aforementioned laptop. In future studies, we would utilize multiple platforms for benchmarking to determine the role of CPU/GPU specifications in the algorithm's performance.
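The sketch below outlines the per-model pass ordering proposed above. It is our own outline of how the render function could be restructured, assuming each mesh carries a flag (here userData.isOpaque) marking whether it should be drawn opaque; it is not part of the current implementation.

// Hypothetical restructuring of the WBOIT render function: opaque objects first,
// then the accumulation, revealage, and composite passes for the transparent ones.
function renderFrameWithOpaquePrepass() {
    renderer.autoClear = false;
    renderer.clear();

    // 1. Opaque prepass: show only the meshes flagged opaque and render them normally.
    scene.traverse(function (obj) {
        if (obj instanceof THREE.Mesh) { obj.visible = !!obj.userData.isOpaque; }
    });
    scene.overrideMaterial = null;
    renderer.render( scene, camera );

    // 2. Transparent meshes only, rendered into the WBOIT intermediate textures.
    scene.traverse(function (obj) {
        if (obj instanceof THREE.Mesh) { obj.visible = !obj.userData.isOpaque; }
    });
    scene.overrideMaterial = accumulationMaterial;
    renderer.render( scene, camera, accumulationTexture, true );   // clear the target first
    scene.overrideMaterial = revealageMaterial;
    renderer.render( scene, camera, revealageTexture, true );

    // 3. Composite the transparent layers on top of the opaque prepass already on screen.
    compositingUniforms[ "texAccum" ].value = accumulationTexture;
    compositingUniforms[ "texReveal" ].value = revealageTexture;
    renderer.render( compositeScene, compositeCamera );

    scene.overrideMaterial = null;
}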
References

McGuire, M., & Bavoil, L. (2013). Weighted Blended Order-Independent Transparency. Journal of Computer Graphics Techniques, 2(2), 122-141.

Maule, M., Comba, J., Torchelsen, R., & Bastos, R. (2013). Hybrid Transparency. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (pp. 103-118). New York, NY: ACM.

Carnecky, R., Fuchs, R., Mehl, S., Jang, Y., & Peikert, R. (2013). Smart Transparency for Illustrative Visualization of Complex Flow Surfaces. IEEE Transactions on Visualization and Computer Graphics, 19(5).

Bavoil, L., & Myers, K. (2008). Order Independent Transparency with Dual Depth Peeling. Nvidia.

Liu, B., Wei, L., Xu, Y., & Wu, E. (2009). Multi-Layer Depth Peeling via Fragment Sort. In 11th IEEE International Conference on Computer-Aided Design and Computer Graphics (CAD/Graphics '09) (pp. 452-456).

Enderton, E., Sintorn, E., Shirley, P., & Luebke, D. (2011). Stochastic Transparency. IEEE Transactions on Visualization and Computer Graphics, 17(8), 157-164.

Salvi, M., Montgomery, J., & Lefohn, A. (2011). Adaptive Transparency. In Proceedings of the ACM SIGGRAPH Symposium on High Performance Graphics (pp. 119-126). New York, NY: ACM.

Everitt, C. (2001). Interactive Order-Independent Transparency. Nvidia.

Rose, A. (2014, May 11). Three.js webgl - oit. Retrieved November 17, 2014, from http://arose.github.io/demo/oit/examples/webgl_oit.html