Unity 5.x Shaders and Effects Cookbook

Master the art of Shader programming to bring life to your
Unity projects
Copyright © 2016 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or
transmitted in any form or by any means, without the prior written permission of the publisher,
except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the
information presented. However, the information contained in this book is sold without
warranty, either express or implied. Neither the authors, nor Packt Publishing, and its dealers
and distributors will be held liable for any damages caused or alleged to be caused directly or
indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies
and products mentioned in this book by the appropriate use of capitals. However, Packt
Publishing cannot guarantee the accuracy of this information.
First Published: February 2016
Production reference: 1220216
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
ISBN 978-1-78528-524-0
Kenneth Lammers
Commissioning Editor
Priya Singh
Acquisition Editors
Erol Staveley
Content Development Editor
Mehvash Fatima
Technical Editors
Pranil Pathare
Copy Editor
Tasneem Fatehi
Project Coordinator
Kirk D'Penha
Production Coordinator
Nilesh Mohite
Cover Work
Nilesh Mohite
About the Authors
is a passionate developer, author, and motivational speaker, recognized as
one of Develop's "30 under 30." His expertise has been built over the past 10 years, while he
dedicated his time to academia and the gaming industry. He started his independent career
to fully explore his creativity, tearing down the wall between art and gaming. Prior to that, he
worked at Imperial College London, where he discovered his passion for teaching and writing.
His titles include a gravity puzzle and the upcoming time travel platformer, Still Time.
eBooks, discount offers, and more
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us for more details.
You can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.
Why subscribe?
Fully searchable across every book published by Packt
Copy and paste, print, and bookmark content
On demand and accessible via a web browser
Table of Contents
Chapter 1: Creating Your First Shader
Creating a basic Standard Shader
Migrating Legacy Shaders from Unity 4 to Unity 5
Adding properties to a shader
Using properties in a Surface Shader
Chapter 2: Surface Shaders and Texture Mapping
Diffuse shading
Using packed arrays
Adding a texture to a shader
Scrolling textures by modifying UV values
Creating a transparent material
Packing and blending textures
Creating a circle around your terrain
Chapter 3: Understanding Lighting Models
Creating a custom diffuse lighting model
Creating a Toon Shader
Chapter 4: Physically Based Rendering in Unity 5
Chapter 10: Advanced Shading Techniques
Using CgInclude files that are built into Unity
Making your shader world modular with CgInclude
Implementing a Fur Shader
Implementing heatmaps with arrays
Unity 5.x Shaders and Effects Cookbook is your guide to becoming familiar with the creation of shaders and post effects in Unity 5. You will start your journey at the beginning, creating the most basic shaders and learning how the shader code is structured. This foundational knowledge will arm you with the means to progress further through each chapter, learning advanced techniques such as volumetric explosions and fur shading. This edition of the book is written specifically for Unity 5 and will help you to master physically-based rendering and global illumination to get as close to photorealism as possible.
By the end of each chapter, you will have gained new skill sets that will increase the quality of
your shaders and even make your shader writing process more efficient. These chapters have been tailored so that you can jump into each section and learn a specific skill from beginner
to expert. For those who are new to shader writing in Unity, you can progress through each
chapter, one at a time, to build on your knowledge. Either way, you will learn the techniques
that make modern games look the way they do.
Once you have completed this book, you will have a set of shaders that you can use in your
Unity 3D games as well as the understanding to add to them, accomplish new effects, and
address performance needs. So let's get started!
What this book covers
Chapter 1, Creating Your First Shader, introduces you to the world of shader coding in Unity 4 and 5.
Chapter 2, Surface Shaders and Texture Mapping, covers the most common and useful techniques that you can implement with Surface Shaders, including how to use textures and normal maps for your models.
Chapter 3, Understanding Lighting Models, gives you an in-depth explanation of how shaders model the behavior of light. The chapter teaches you how to create custom lighting models used to simulate special effects such as toon shading.
Chapter 4, Physically Based Rendering in Unity 5, shows you that physically-based rendering is the standard technology used by Unity 5 to bring realism to your games. This chapter explains how to make the most out of it, mastering transparencies, reflective surfaces, and global illumination.
Chapter 5, Vertex Functions, teaches you how shaders can be used to alter the geometry of an object; this chapter introduces vertex modifiers and uses them to bring volumetric explosions, snow shaders, and other effects to life.
Chapter 6, Fragment Shaders and Grab Passes, explains how to use grab passes to make materials that emulate the deformations generated by semi-transparent materials.
Chapter 7, Mobile Shader Adjustment, helps you optimize your shaders to get the most out of any device.
Chapter 8, Screen Effects with Unity Render Textures, shows you how to create special effects and visuals that would otherwise be impossible to achieve.
Chapter 9, Gameplay and Screen Effects, tells you how post-processing effects can be used to complement your gameplay, simulating, for instance, a night vision effect.
Chapter 10, Advanced Shading Techniques, introduces the most advanced techniques in this book, such as fur shading and heatmap rendering.
What you need for this book
The following is a list of the required and optional software to complete the recipes in this book:
Unity 5
A 3D application such as Maya, Max, or Blender (optional)
A 2D image editing application such as Photoshop or Gimp (optional)
Who this book is for
This book is written for developers who want to create their first shaders in Unity 5 or wish to
take their game to a whole new level by adding professional post-processing effects. A solid
understanding of Unity is required.
In this book, you will find several headings that appear frequently (Getting ready, How to do it,
How it works, There's more, and See also).
To give clear instructions on how to complete a recipe, we use these sections as follows:
When we wish to draw your attention to a particular part of a code block, the relevant lines or
items are set in bold:
Warnings or important notes appear in a box like this.
Tips and tricks appear like this.
Reader feedback
Feedback from our readers is always welcome. Let us know what you think about this
book—what you liked or disliked. Reader feedback is important for us as it helps us develop
titles that you will really get the most out of.
To send us general feedback, simply e-mail us, mentioning the book's title in the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide.
Customer support
Now that you are the proud owner of a Packt book, we have a number of things to help you to
get the most from your purchase.
Downloading the example code
You can download the example code files for this book from your account on the Packt website. If you purchased this book elsewhere, you can register on the website's support page to have the files e-mailed directly to you.
You can download the code files by following these steps:
Log in or register to our website using your e-mail address and password.
Hover the mouse pointer on the SUPPORT tab at the top.
Click on Code Downloads & Errata.
Enter the name of the book in the Search box.
Select the book for which you're looking to download the code files.
Choose from the drop-down menu where you purchased this book from.
Click on Code Download.
Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:
WinRAR / 7-Zip for Windows
Zipeg / iZip / UnRarX for Mac
7-Zip / PeaZip for Linux
Downloading the color images of this book
We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from the book's support page.
Errata
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting the errata page, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.
To view the previously submitted errata, search for the book on the support page; the required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At
Packt, we take the protection of our copyright and licenses very seriously. If you come across
any illegal copies of our works in any form on the Internet, please provide us with the location
address or website name immediately so that we can pursue a remedy.
Please contact us with a link to the suspected pirated material.
We appreciate your help in protecting our authors and our ability to bring you valuable content.
Questions
If you have a problem with any aspect of this book, you can contact us, and we will do our best to address the problem.
Creating Your First Shader
This chapter will cover some of the more common diffuse techniques found in today's Game
Development Shading Pipelines. In this chapter, you will learn about the following recipes:
Creating a basic Standard Shader
Migrating Legacy Shaders from Unity 4 to Unity 5
Adding properties to a shader
Using properties in a Surface Shader
Let's imagine a cube that has been painted white uniformly. Even if the color used is the
same on each face, they will all have different shades of white depending on the direction
that the light is coming from and the angle that we are looking at it. This extra level of realism
is achieved in 3D graphics by shaders, special programs that are mostly used to simulate how light works. A wooden cube and a metal one may share the same 3D model, but what makes them look different is the shader that they use. Recipe after recipe, this first chapter
will introduce you to shader coding in Unity. If you have little to no previous experience with
shaders, this chapter is what you need to understand what shaders are, how they work, and
how to customize them.
By the end of this chapter, you will have learned how to build basic shaders that perform
basic operations. Armed with this knowledge, you will be able to create just about any
Surface Shader.
Creating a basic Standard Shader
Every Unity game developer should be familiar with the concept of components. All the objects that are part of a game contain a number of components that affect their look and behavior. While scripts determine how objects should behave, renderers decide how they should appear on the screen. Unity comes with several renderers, depending on the type of object that we are trying to visualise; every 3D model typically has a mesh renderer. An object should have only one renderer, but the renderer itself can contain several materials. Each material is a wrapper for a single shader, the final ring in the food chain of 3D graphics. The relationships between these components can be seen in the following diagram:
Understanding the difference between these components is essential to understand how
shaders work.
If you are using the Unity project that came with the cookbook,
you can skip to step 4.
Rename the folder that you created by right-clicking on it and selecting Rename from the drop-down list, or by selecting the folder and hitting F2 on the keyboard. Create another folder for your materials and rename it accordingly. Right-click on the shader folder and select Create | Shader; then right-click on the material folder and select Create | Material. Rename both the shader and the material to the same descriptive name. Open the shader in MonoDevelop (the default script editor for Unity) by double-clicking on it. This will automatically launch the editor for you and display the shader code.
You will see that Unity has already populated our shader with
some basic code. This, by default, will get you a basic Diffuse
shader that accepts one texture. We will be modifying this base
code so that you can learn how to quickly start developing your
own custom shaders.
Now let's give our shader a custom path from which it's selected. The very first line of code in the shader is the custom description that we have to give the shader so that Unity can make it available in the shader drop-down list when assigning it to materials. You can name the path whatever you want and rename it at any time, so don't worry about any dependencies at this point. Save the shader in MonoDevelop and return to the Unity editor. Unity will automatically compile the shader when it recognizes that the file has been updated. This is what your shader should look like at this point:
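The shader listing itself is missing from this copy. Unity 5's default Surface Shader template, with the first line renamed, looks like the following sketch (the path "CookbookShaders/StandardDiffuse" is only an example name; use whatever path and name you chose):

```
Shader "CookbookShaders/StandardDiffuse" {
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        _Metallic ("Metallic", Range(0,1)) = 0.0
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        LOD 200

        CGPROGRAM
        // Physically based Standard lighting model, shadows on all light types
        #pragma surface surf Standard fullforwardshadows
        #pragma target 3.0

        sampler2D _MainTex;

        struct Input {
            float2 uv_MainTex;
        };

        half _Glossiness;
        half _Metallic;
        fixed4 _Color;

        void surf (Input IN, inout SurfaceOutputStandard o) {
            // Albedo comes from a texture tinted by color
            fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = c.rgb;
            o.Metallic = _Metallic;
            o.Smoothness = _Glossiness;
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```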
Technically speaking, this is a Surface Shader based on physically-based rendering, which Unity 5 has adopted as its new standard. As the name suggests, this type of shader achieves realism by simulating how light physically behaves when hitting objects. If you are using a previous version of Unity (such as Unity 4), your code will look very different. Prior to the introduction of physically-based shaders, Unity 4 used less sophisticated techniques. All these different types of shader will be further explored in the next chapters of this book.
After your shader is created, we need to connect it to a material. Select the material that we created earlier and look at the Inspector tab. From the Shader drop-down list, select your new shader. (Your shader path might be different if you chose to use a different path name.) This will assign your shader to your material and make it ready for you to assign to an object.
To assign a material to an object, you can simply click and drag your material from the Project tab to the object in your scene. You can also drag a material onto an object in the Scene view of the Unity editor to assign it. The screenshot of an example is as follows:
Not much to look at at this point, but our shader development environment is set up and we
can now start to modify the shader to suit our needs.
How it works…
Unity has made the task of getting your shader environment up and running very easy for you. It is simply a matter of a few clicks and you are good to go. There are a lot of elements working in the background with regard to the Surface Shader itself. Unity has taken the Cg shader language and made it more efficient to write by doing a lot of the heavy Cg code lifting for you. The Surface Shader language is a more component-based way of writing shaders. Tasks such as processing your own texture coordinates and transformation matrices have already been done for you, so you don't have to start from scratch any more. In the past, we would have to start a new shader and rewrite a lot of code over and over again. As you gain more experience with Surface Shaders, you will naturally want to explore more of the underlying functions of the Cg language and how Unity is processing all of the low-level graphics processing unit (GPU) tasks for you.
All the files in a Unity project are referenced independently from the
folder that they are in. We can move shaders and materials from
within the editor without the risk of breaking any connection. Files,
however, should never be moved from outside the editor as Unity
will not be able to update their references.
So, by simply changing the shader's path name to a name of our choice, we have got our basic Diffuse shader working in the Unity environment, with lights and shadows and all of that, by just changing one line of code!
The source code of the built-in shaders is typically hidden in Unity 5. You cannot open this
from the editor like you do with your own shaders.
For more information on where to find a large portion of the built-in Cg functions for Unity, go to your Unity install directory and navigate to the CGIncludes folder. In this folder, you can find the source code of the shaders shipped with Unity. Over time, they have changed a lot; Chapter 10, Advanced Shading Techniques, will explore in-depth how to use CgInclude for a modular approach to shader coding.
Migrating Legacy Shaders from Unity 4 to Unity 5
Videogames have changed massively over the last 10 years. Every new game comes with cutting-edge techniques that are getting us closer to achieving real-time photorealism. We should not be surprised by the fact that shaders themselves have changed massively throughout the lifetime of Unity. This is one of the major sources of confusion when approaching shaders for the first time. Prior to Unity 5, mainly two different shaders were adopted: Diffuse and Specular. As the names suggest, they were used for matte and shiny materials, respectively. If you have started directly with Unity 5, you can skip this recipe. This recipe explains how to replicate these effects using Unity 5.
If your project is in the later stages of development with an older version of Unity, you should be very careful before migrating. Many things have changed behind the curtains of the engine, and even if your built-in shaders will most likely work without any problem, your scripts might not. If you are to migrate your entire workspace, the first thing that you should do is take a backup. It is important to remember that saving assets and scenes is not enough, as most of the configuration in Unity is stored in its metadata files. The safest option to survive a migration is to duplicate the entire folder that contains your project. The best way of doing this is by physically copying the folder from File Explorer (Windows) or Finder (Mac).
How to do it...
There are two main options if you want to migrate your built-in shaders: upgrading your project
automatically or switching to Standard Shaders instead.
Upgrading automatically
The first option is the easiest one. Unity 5 can import a project made with an earlier version and upgrade it. You should notice that once the conversion is done, you will not be able to use Unity 4 any more; even if none of your assets may have changed directly, the Unity metadata has been converted. To proceed, open Unity 5 and use Open Project to select the folder of your old project. You will be asked if you want to convert it; confirm to proceed. Unity will reimport all of your assets and recompile all of your scripts. The process might last for several hours if your project is big. Once the conversion is done, your built-in shaders from Unity 4 should have been replaced with their legacy equivalents. You can check this from the inspector of your materials, which should have changed (for instance) from Bumped Diffuse to Legacy Shader/Bumped Diffuse.
Even if Diffuse, Specular, and the other built-in shaders from Unity 4 are now deprecated, Unity 5 keeps them for backward compatibility. They can be found in the Shader drop-down menu of a material, under the Legacy Shaders group.
Using Standard Shaders
Instead of using the Legacy Shaders, you might decide to replace them with the new Standard Shaders from Unity 5. Before doing this, you should keep in mind that as they are based on a different lighting model, your materials will most likely look different. Unity 4 came with more than eighty different built-in shaders divided into six different families (including Normal, Transparent, Transparent Cutout, Self-Illuminated, and Reflective). In Unity 5, they are all replaced by the Standard Shader introduced in the previous recipe. Unfortunately, there is no magic recipe to convert your shaders directly. However, you can use the following table as a starting point to understand how the Standard Shader can be configured to simulate Unity 4 Legacy Shaders:
Unity 4                         Unity 4 (Legacy)                              Unity 5
Diffuse                         Legacy Shader/Diffuse                         Standard (physically-based rendering: Metallic Workflow)
Specular                        Legacy Shader/Specular                        Standard (Specular setup) (physically-based rendering: Specular Workflow)
Transparent                     Legacy Shader/Transparent                     Standard (Rendering Mode: Transparent)
Transparent Cutout Vertex-Lit   Legacy Shader/Transparent Cutout Vertex-Lit   Standard (Rendering Mode: Cutout)
You can change the shader used by your old material using the Shader drop-down menu in the Inspector. All you need to do is simply select the appropriate Standard Shader. If your old shader used textures, colours, and normal maps, they will be automatically used in the new Standard Shader. You might still have to configure the parameters of the Standard Shader to get as close to your original lighting model as possible. The following picture shows the Stanford bunny with a Legacy Diffuse Shader (right), converted Standard Shader (left), and Standard Shader with its smoothness set to zero (middle):
Migrating custom shaders
If you have written custom shaders in Unity 4, chances are that they will work straightaway in Unity 5. Despite this, Unity has made some minor changes in the way shaders work, which can cause both errors and inconsistencies. The most relevant and important one is the intensity of the light: lights in Unity 5 are twice as bright. All the Legacy Shaders have been rewritten to take this into account, so if you have upgraded your shaders or switched to Standard Shaders, you will not notice any difference. If you have written your own lighting model, however, you will have to make sure that the intensity of the light is not multiplied by two any more. The following code is used to ensure this:
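The listing itself did not survive extraction; a sketch of the idea, using an illustrative custom diffuse lighting function (the function and variable names are assumptions, not the book's exact code):

```
// Unity 4-era custom lighting functions often doubled the light:
//   c.rgb = s.Albedo * _LightColor0.rgb * (diff * atten * 2);
// In Unity 5, lights are already twice as bright, so drop the * 2:
half4 LightingCustomDiffuse (SurfaceOutput s, half3 lightDir, half atten) {
    half diff = max (0, dot (s.Normal, lightDir));
    half4 c;
    c.rgb = s.Albedo * _LightColor0.rgb * (diff * atten); // no * 2 in Unity 5
    c.a = s.Alpha;
    return c;
}
```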
If you haven't written a shader yet, don't panic: lighting models will be extensively explained in Chapter 3, Understanding Lighting Models.
There are several other changes in the way Unity 5 handles shaders compared to Unity 4. You can see all of them in the official upgrade guide, Shaders in Unity 5.0.
How it works...
Writing shaders is always a trade-off between realism and efficiency; realistic shaders require intensive computation, potentially introducing a significant lag. It's important to use only those effects that are strictly required: if a material does not need specular reflections, then there is no need to use a shader that calculates them. This has been the main reason why Unity 4 was shipped with so many different shaders. The new Standard Shader of Unity 5 can potentially replace all of the previous shaders as it incorporates normal mapping, transparency, and reflection. However, it has been cleverly optimized so that only the effects that are really necessary are calculated. If your standard material does not have reflections, they will not be calculated.
Despite this, the Standard Shader is mainly designed for realistic materials. The Legacy
Diffuse and Specular shaders, in comparison, were not really designed for realistic materials.
This is the reason switching from Legacy to Standard Shaders will mostly introduce slight
changes in the way your objects are rendered.
See also
Chapter 3, Understanding Lighting Models, explores in-depth how the Diffuse and Specular shaders work. Even if deprecated in Unity 5, understanding them is essential if you want to design new lighting models.
Chapter 4, Physically Based Rendering in Unity 5, will show you how to unlock the potential of the Standard Shader in Unity 5.
Adding properties to a shader
Properties of a shader are very important for the shader pipeline as they are the method that you use to let the artist or user of the shader assign textures and tweak your shader values. Properties allow you to expose GUI elements in a material's Inspector tab without you having to use a separate editor, which provides visual ways to tweak a shader.
With your shader opened in MonoDevelop, look at the block of lines 2 through 7. This is called the Properties block. Currently, it has one texture property in it called _MainTex. If you look at your material that has this shader applied to it, you will notice that there is one matching GUI element for it.
Again, Unity has made this process very efficient in terms of coding and the amount of time it takes to iterate through changing your properties.
You can give a friendlier name to your shader in its first line. The path declared there tells Unity what to call the shader and which group to move it to. If you duplicate a shader, your new file will share the same name. To avoid confusion, make sure to change the first line of each new shader so that it uses a unique alias.
How to do it…
Once your shader is ready, we can start changing its properties:
In the Properties block of our shader, remove the current property by deleting the following code from our current shader:
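The deleted snippet is not reproduced in this copy; in the default template, the texture property in question would be:

```
_MainTex ("Albedo (RGB)", 2D) = "white" {}
```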
As we have removed an essential property, this shader will not compile until the other references to it are removed. Let's remove this other line:
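The line meant here is also missing; assuming the default template, it would be the matching sampler declaration in the subshader:

```
sampler2D _MainTex;
```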
This leaves the _Color property to color the model. Let's change this by replacing the first line of code of the surf() function:
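The replacement line is absent from this copy; with the texture gone, the surf() function can simply output the flat color, so a minimal sketch of the new first line would be:

```
fixed4 c = _Color;
```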
When you save and return to Unity, the shader will compile, and you will see that now our material's Inspector tab doesn't have a texture swatch anymore. To complete the refit of this shader, let's add one more property and see what happens. Enter the following code:
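The listing is missing here; a plausible property for an extra color swatch looks like this (the name _AmbientColor is illustrative):

```
_AmbientColor ("Ambient Color", Color) = (1,1,1,1)
```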
We have added another color swatch to the material's Inspector tab. Now let's add one more to get a feel for other kinds of properties that we can create. Add the following code to the Properties block:
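A slider is declared with the Range property type; a sketch (the name, label, and range are illustrative):

```
_MySliderValue ("This is a Slider", Range(0, 10)) = 2.5
```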
We have now created another GUI element that allows us to visually interact with our shader. This time, we created a slider, as shown in the following screenshot:
Properties allow you to create a visual way to tweak shaders without having to change values
in the shader code itself. The next recipe will show you how these properties can actually be
used to create a more interesting shader.
While properties belong to shaders, the values associated with them are stored in materials. The same shader can be safely shared between many different materials. On the other hand, changing the property of a material will affect the look of all the objects that are currently using it.
How it works…
Every Unity shader has a built-in structure that it is looking for in its code. The Properties block is one of those sections that are expected by Unity. The reason behind this is to give you, the shader programmer, a means of quickly creating GUI elements that tie directly into your shader code. The properties that you declare in the Properties block can then be used in your shader code to change values, colors, and textures. The syntax to declare a property is as follows:
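The syntax listing did not survive extraction; in ShaderLab, a property declaration follows this general shape:

```
_VariableName ("Inspector GUI Name", Type) = DefaultValue
```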
Let's take a look at what is going on under the hood here. When you first start writing a new property, you will need to give it a Variable Name. The variable name is going to be the name that your shader code is going to use in order to get the value from the GUI element. This saves us a lot of time because we don't have to set up this system ourselves.
The next elements of a property are the Inspector GUI Name and the Type of the property. The Inspector GUI Name is the name that is going to appear in the material's Inspector tab when the user is interacting with and tweaking the shader. The Type is the type of data that this property is going to control. There are many types that we can define for properties inside of Unity shaders. The following table describes the types of variables that we can have in our shaders:
Surface Shader property types:
Range (min, max): creates a float property as a slider from the minimum value to the maximum value
Color: creates a color swatch in the Inspector tab
2D: creates a texture swatch that allows a user to drag a texture into the shader
Rect: creates a non-power-of-2 texture swatch and functions the same as the 2D property
Cube: creates a cube map swatch in the Inspector tab and allows a user to drag and drop a cube map into the shader
Float: creates a float value in the Inspector tab, without a slider
Vector: creates a four-float property that allows you to create directions or colors
Finally, there is the Default Value. This simply sets the value of the property to the value that you place in the code. So, for example, a color property with a default value of (1,1,1,1), an RGBA value, is set to white when first created.
The properties are documented in the Unity manual.
Using properties in a Surface Shader
Now that we have created some properties, let's actually hook them up to the shader so that we can use them as tweaks to our shader and make the material process much more interactive. We can use the properties' values from the material's Inspector tab because we have attached a variable name to the property itself, but in the shader code, you have to set up a couple of things before you can start calling the value by its variable name.
How to do it…
The following steps show you how to use the properties in a Surface Shader:
To begin, let's remove the following lines of code, as we deleted the property they refer to in the previous recipe:
Next, add the following lines of code to the shader, below the CGPROGRAM statement:
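Assuming the color and slider properties suggested in the previous recipe, the matching subshader variables would be declared like this (the names are illustrative and must exactly match your own property names):

```
float4 _AmbientColor;
float _MySliderValue;
```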
With step 2 complete, we can now use the values from the properties in our shader. Let's do this by adding the value from the ambient color property to the main color property and giving the result of this to the o.Albedo line of code. So, let's add the following code to the shader in the surf() function:
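The listing is missing here; it likely combined the two colors and the slider with pow(). A sketch, assuming the illustrative property names used earlier:

```
void surf (Input IN, inout SurfaceOutputStandard o) {
    // Add the two color properties, then raise the result to the
    // slider's power to push the saturation of the final color.
    fixed4 c = pow((_Color + _AmbientColor), _MySliderValue);
    o.Albedo = c.rgb;
    o.Alpha = c.a;
}
```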
Finally, your shader should look like the following shader code. If you save your shader in MonoDevelop and re-enter Unity, your shader will compile. If there were no errors, you will now have the ability to change the ambient and emissive colors of the material as well as increase the saturation of the final color using the slider value.
Pretty neat, huh!
The pow() function is a built-in function that will perform the equivalent math function of power. So, argument 1 is the value that we want to raise to a power, and argument 2 is the power that we want to raise it to.
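A couple of Cg lines illustrate the idea (the variable names are just for the example):

```
float eight = pow(2.0, 3.0);  // 2 raised to the power 3, that is 8.0
// Applied to a color, pow() operates per channel: each 0.5 becomes 0.25
fixed4 dimmed = pow(fixed4(0.5, 0.5, 0.5, 1.0), 2.0);
```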
To find out more about the pow() function, look at the Cg tutorial. It is a great free resource that you can use to learn more about shading and get a glossary of all the functions available to you in the Cg shading language.
The following screenshot demonstrates the result obtained using our properties to control our material's colors and saturation from within the material's Inspector tab:
How it works…
When you declare a new property in the Properties block, you are providing a way for the shader to retrieve the tweaked value from the material's Inspector tab. This value is stored in the variable name portion of the property. In order for you to be able to use the value in the SubShader block, you need to create three new variables with the same names as the properties' variable names. This automatically sets up a link between the two so that they know they have to work with the same data. Additionally, it declares the type of data that we want to store in our subshader variables, which will come in handy when we look at optimizing shaders in a later chapter.
Once you have created the subshader variables, you can then use the values in the
function. In this case, we want to add the
variables together
and take it to a power of whatever the
variable is equal to in the material's
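As a sketch of this link between the two blocks (property names here are placeholders, not necessarily the ones used in the recipe), the pattern looks like this:

```hlsl
Properties {
    _AmbientColor ("Ambient Color", Color) = (1,1,1,1)
    _MySliderValue ("This is a Slider", Range(0,10)) = 2.5
}
SubShader {
    CGPROGRAM
    #pragma surface surf Standard
    // Each variable repeats a property's name and declares a Cg type,
    // which links it to the value tweaked in the Inspector
    float4 _AmbientColor;
    float _MySliderValue;
    // ...
    ENDCG
}
```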
The vast majority of shaders start out as Standard Shaders and get modified until they match the desired look. We have now created the foundation for any Surface Shader you will create that requires a diffuse component.
There's more…
Like any other programming language, Cg does not allow mistakes. As such, your shader will not work if you have a typo in your code. When this happens, your materials are rendered in a bright, unmistakable magenta.

When a script does not compile, Unity prevents your game from being exported or even executed. Conversely, errors in shaders do not stop your game from being executed.

If one of your shaders appears as magenta, it is time to investigate where the problem is. If you select the incriminated shader, you will see a list of errors displayed in its Inspector. Despite showing the line that raised the error, it rarely means that this is the line that has to be fixed. The error message shown in the previous image is generated by deleting a variable from the Properties block. However, the error is raised by the first line that tries to access such a variable.
Finding and fixing what's wrong with code is a process called debugging. The most common mistakes that you should check for are as follows:
A missing bracket. If you forgot to add a curly bracket to close a section, the compiler is likely to raise errors at the end of the document, at the beginning, or in a new section.
A missing semicolon. One of the most common mistakes, but luckily one of the easiest to spot and fix. Errors are often raised by the following line.
A property that has been defined in the Properties section but has not been coupled with a variable in the SubShader block.
Conversely to what you might be used to in C# scripts, floating point values in Cg do not need to be followed by an f: it's 1.5, not 1.5f.
The error messages raised by shaders can be very misleading, especially due to their strict syntactic constraints. If you are in doubt about their meaning, it's best to search the Internet. The Unity forums are filled with other developers who are likely to have encountered (and fixed) your problem before.

More information on how to master Surface Shaders and their properties can be found in Chapter 2, Surface Shaders and Texture Mapping. If you are curious to see what shaders can actually do when used at their full potential, have a look at Chapter 10, Advanced Shading, for some of the most advanced techniques covered in this book.
Surface Shaders and
Texture Mapping
In this chapter, we will explore Surface Shaders. We will start from a very simple matte material and end with holographic projections and advanced terrain blending. We can also use textures to animate, blend, and drive any other property that we want. In this chapter, you will learn about the following methods:
Diffuse shading
Using packed arrays
Adding a texture to a shader
Scrolling textures by modifying UV values
Normal mapping
Creating a transparent material
Creating a Holographic Shader
Packing and blending textures
Creating a circle around your terrain
Surface Shaders have been introduced in Chapter 1, Creating Your First Shader, as the main type of shader used in Unity. This chapter will show in detail what these actually are and how they work. Generally speaking, there are two essential steps in every Surface Shader. First, you have to specify certain physical properties of the material that you want to describe, such as its diffuse color, smoothness, and transparency. These properties are initialized in a function called the surface function and stored in a structure called the surface output. Secondly, the surface output is passed to a lighting function. This is a special function that will also take information about the nearby lights in the scene. Both these parameters are then used to calculate the final color for each pixel of your model. The lighting function is where the real calculations of a shader take place, as it's the piece of code that determines how light should behave when it touches a material.
The following diagram loosely summarizes how a Surface Shader works. Custom lighting models will be explored in Chapter 3, Understanding Lighting Models, while Chapter 5 will focus on vertex modifiers:
Diffuse shading
Before starting our journey into texture mapping, it is important to understand how diffuse materials work. Certain objects might have a uniform color and smooth surface, but not be smooth enough to shine with reflected light. These matte materials are best represented with a Diffuse shader. While in the real world pure diffuse materials do not exist, Diffuse shaders are relatively cheap to implement and find a large application in games with low-poly aesthetics.

As this shader has been refitted from a Standard Shader, it will use physically-based rendering to simulate how light behaves on your models. If you are trying to achieve a non-photorealistic look, you can change the first #pragma directive so that it uses Lambert instead of Standard. If you do so, you should also replace SurfaceOutputStandard with SurfaceOutput.
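A minimal sketch of the non-photorealistic variant described above might look like this (shader and property names are illustrative):

```hlsl
Shader "Custom/SimpleDiffuse" {
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
    }
    SubShader {
        Tags { "RenderType"="Opaque" }
        CGPROGRAM
        // Lambert replaces the physically-based Standard lighting model
        #pragma surface surf Lambert
        float4 _Color;
        struct Input { float2 uv_MainTex; };
        // Note: SurfaceOutput, not SurfaceOutputStandard, goes with Lambert
        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = _Color.rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```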
How it works...
The way shaders allow you to communicate the rendering properties of your material to their lighting model is via a surface output. It is basically a wrapper around all the parameters that the current lighting model needs. It should not surprise you that different lighting models have different surface output structs. The following table shows the three main output structs used in Unity 5 and how they can be used:

Struct | Type of shaders
SurfaceOutput | Any Surface Shader (diffuse and specular, in both Unity 4 and Unity 5)
SurfaceOutputStandard | Surface Shaders using the Standard lighting model (Unity 5)
SurfaceOutputStandardSpecular | Surface Shaders using the Standard (Specular setup) lighting model (Unity 5)
The SurfaceOutputStandard struct has the following properties:
Albedo: This is the base color of the material (whether it's diffuse or specular)
Emission: This property is declared as half3, while it was defined as fixed3 in SurfaceOutput
Occlusion: This is the occlusion (default 1)
Smoothness: This is the smoothness (0 = rough, 1 = smooth)
How to do it...
There are two types of variables in Cg: single values and packed arrays. The latter can be identified because their type ends with a number, such as float3. As their names suggest, these types of variables are similar to structs, which means that they each contain several single values. Cg calls them packed arrays, though they are not exactly arrays in the traditional sense.

The elements of a packed array can be accessed as a normal struct. They are typically called x, y, z, and w. However, Cg also provides you with another alias for them, that is, r, g, b, and a. Despite there being no difference between using one notation or the other, it can make a huge difference for the readers. Shader coding, in fact, often involves calculation with positions and colors. You might have seen this in the Standard Shaders:

    o.Alpha = c.a;

Here, o was a struct and c was a packed array. This is also why Cg prohibits the mixed usage of these two syntaxes: you cannot use c.xgb.
There is also another important feature of packed arrays that has no equivalent in C#: swizzling. Cg allows addressing and reordering elements within packed arrays in just a single line. Once again, this appears in the Standard Shader:

    o.Albedo = c.rgb;

Here, o.Albedo is declared as fixed3, which means that it contains three values of the fixed type. However, c is a fixed4. A direct assignment would result in a compiler error, as their sizes do not match. The C# way of doing this would be to copy the first three components one by one. However, it can be compressed in Cg with the .rgb swizzle.

Cg also allows reordering elements, for instance, using c.bgr to swap the red and blue channels. Lastly, when a single value is assigned to a packed array, it is copied to all of its fields. This is referred to as smearing. Swizzling can also be used on the left-hand side of an expression, allowing only certain components of a packed array to be overwritten; this is called masking.
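The swizzling behaviors described above can be summarized in a short sketch (values chosen only for illustration):

```hlsl
float4 c = float4(0.1, 0.2, 0.3, 1.0);
float3 rgb  = c.rgb;    // masking: take only the first three components
float3 bgr  = c.bgr;    // reordering: red and blue swapped
float4 grey = 0.5;      // smearing: becomes (0.5, 0.5, 0.5, 0.5)
c.rg = float2(1, 0);    // left-hand swizzling: overwrite only r and g
// c.xgb would be a compiler error: xyzw and rgba cannot be mixed
```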
Packed matrices

Where swizzling really shows its full potential is when applied to packed matrices. Cg allows types such as float4x4, which represents a matrix of floats with four rows and four columns. You can access a single element of the matrix using the _mRC notation, where R is the row and C is the column:

    float4x4 matrix;
    float first = matrix._m00;

The _mRC notation can also be chained:

    float4 diagonal = matrix._m00_m11_m22_m33;

An entire row can be selected using square brackets:

    float4 firstRow = matrix[0];
    // Equivalent to
    float4 firstRow = matrix._m00_m01_m02_m03;

See also

Packed arrays are one of the nicest features of Cg. You can discover more about them in the Cg documentation.
Adding a texture to a shader
Textures can bring our shaders to life very quickly in terms of achieving very realistic effects. In order to effectively use textures, we need to understand how a 2D image is mapped to a 3D model. This process is called texture mapping, and it requires some work to be done on the shader and 3D model that we want to use. Models, in fact, are made out of triangles; each vertex can store data that shaders can access. One of the most important pieces of information stored in vertices is the UV data. It consists of two coordinates, U and V, ranging from 0 to 1. They represent the position of the pixel in the 2D image that will be mapped to the vertices. UV data is present only for vertices; when the inner points of a triangle have to be texture-mapped, the GPU interpolates the closest UV values to find the right pixel in the texture to be used. The following image shows you how a 2D texture is mapped to a triangle from a 3D model:

The UV data is stored in the 3D model and requires modeling software to be edited. Some models lack the UV component, hence they cannot support texture mapping. The Stanford bunny, for example, was not originally provided with one.
How to do it...
Adding a texture to your model using the Standard Shader is incredibly simple, as follows:
Create a new Standard Shader.
Create a new material.
Assign the shader to the material by dragging the shader over it.
After selecting the material, drag your texture to the empty rectangle called Albedo (RGB). If you have followed all these steps correctly, your material should look like this:

The Standard Shader knows how to map a 2D image to a 3D model using its UV data.
How it works…
When the Standard Shader is used from the inspector of a material, the process behind texture mapping is completely transparent to developers. If we want to understand how it works, it's necessary to take a closer look at the shader code. From the Properties section, we can see that the Albedo (RGB) texture is actually referred to in the code as _MainTex. In the CGPROGRAM section, this texture is defined as sampler2D, the standard type for 2D textures:

    sampler2D _MainTex;
The next line shows a struct called Input. This is the input parameter for the surface function and contains a packed array called uv_MainTex. Every time the surface function is called, the Input struct will contain the UV of _MainTex for the specific point of the 3D model that needs to be rendered. The Standard Shader recognizes that the name uv_MainTex refers to _MainTex and initializes it automatically. If you are interested in understanding how the UV is actually mapped from a 3D space to a 2D texture, you can check Chapter 3, Understanding Lighting Models.
Finally, the UV data is used to sample the texture in the first line of the surface function. tex2D is the texture sampling function of Cg; it takes a texture and UV and returns the color of the pixel at that position.

UV coordinates go from 0 to 1, where (0,0) and (1,1) correspond to two opposite corners. Different implementations associate UV with different corners; if your texture happens to appear reversed, try inverting the V component.
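Putting the pieces above together, the sampling step sits inside a surface function shaped roughly like this (a sketch, not the full generated Standard Shader):

```hlsl
sampler2D _MainTex;    // matches the _MainTex property

struct Input {
    float2 uv_MainTex; // the uv_ prefix tells Unity which texture's UVs to fetch
};

void surf (Input IN, inout SurfaceOutputStandard o) {
    // tex2D looks up the texel at the interpolated UV coordinate
    fixed4 c = tex2D(_MainTex, IN.uv_MainTex);
    o.Albedo = c.rgb;
    o.Alpha  = c.a;
}
```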
There's more...
When you import a texture to Unity, you are setting up some of the properties that the GPU will use. The most important is the Filter mode, which determines how colors are interpolated when the texture is sampled. It is very unlikely that the UV data will point exactly to the center of a pixel; in all the other cases, you might want to interpolate between the closest pixels to get a more uniform color. The following is the screenshot of the import settings of an example texture:

For most applications, Bilinear filtering provides an inexpensive yet effective way to smooth the texture. If you are creating a 2D game, however, Bilinear filtering might produce blurred tiles. In this case, you can use Point filtering to remove any interpolation from the texture sampling.

When a texture is seen from a steep angle, texture sampling is likely to produce visually unpleasant artifacts. You can reduce them by setting Aniso Level to a higher value. This is particularly useful for floor and ceiling textures, where glitches can break the illusion of continuity.
If you would like to know more about the inner workings of how textures are mapped to a 3D surface, you can read the information available in the Unity documentation. For a complete list of the options available when importing a 2D texture, you can refer to the texture import settings page of the Unity manual.
Scrolling textures by modifying UV values
One of the most common texture techniques used in today's game industry is the process of allowing you to scroll the textures over the surface of an object. This allows you to create effects such as waterfalls, rivers, lava flows, and so on. It's also a technique that is the basis for creating animated sprite effects, but we will cover this in a subsequent recipe of this chapter. Let's first see how we will create a simple scrolling effect in a Surface Shader.
// Create a separate variable to store our UVs
How it works…
The scrolling system starts with the declaration of a couple of properties, which will allow the user of this shader to increase or decrease the speed of the scrolling effect itself. At their core, they are float values being passed from the material's Inspector tab to the surface function of the shader. For more information on shader properties, see Chapter 1, Creating Your First Shader.

Once we have these float values from the material's Inspector tab, we can use them to offset our UV values in the shader.

To begin this process, we first store the UVs in a separate variable. This variable has to be a two-component type because the UV values are being passed to us from the Input structure.

Once we have access to the mesh's UVs, we can offset them using our scroll speed variables and the built-in _Time variable. This built-in variable returns a variable of the float4 type, meaning that each component of this variable contains different values of time as it pertains to game time.

A complete description of these individual time values is available in the Unity documentation.

The _Time variable will give us an incremented float value based on Unity's game time clock. So, we can use this value to move our UVs in a UV direction and scale that time with our scroll speed variables.

With the correct offset being calculated by time, we can add the new offset value back to the original UV position. This is why we are using the += operator in the next line. We want to take the original UV position, add the new offset value, and then pass this to the tex2D() function as the texture's new UVs. This creates the effect of the texture moving on the surface. We are really just manipulating the UVs, so we are faking the effect of the texture moving.
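The steps above can be sketched in a surface function like this (the property names _ScrollXSpeed and _ScrollYSpeed are assumptions mirroring the recipe's description):

```hlsl
fixed _ScrollXSpeed;   // exposed in the Properties block, e.g. Range(0,10)
fixed _ScrollYSpeed;

void surf (Input IN, inout SurfaceOutputStandard o) {
    // Create a separate variable to store our UVs
    fixed2 scrolledUV = IN.uv_MainTex;
    // _Time.y holds the time in seconds; scale it by the scroll speeds
    fixed xScrollValue = _ScrollXSpeed * _Time.y;
    fixed yScrollValue = _ScrollYSpeed * _Time.y;
    // Add the offset back to the original UV position
    scrolledUV += fixed2(xScrollValue, yScrollValue);
    half4 c = tex2D(_MainTex, scrolledUV);
    o.Albedo = c.rgb;
    o.Alpha  = c.a;
}
```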
Normal mapping
Every triangle of a mesh has a facing direction, which is the direction that it is pointing toward. It is often represented with an arrow placed in the center of the triangle, orthogonal to the surface. The facing direction plays an important role in the way light reflects on a surface. If two adjacent triangles face different directions, they will reflect light at different angles, hence they'll be shaded differently. For curved objects, this is a problem: it is obvious that the geometry is made out of flat triangles.
To avoid this problem, the way the light reflects on a triangle doesn't take into account its facing direction, but its normal direction instead. As stated in the Adding a texture to a shader recipe, vertices can store data; the normal direction is the most used information after the UV data. This is a vector of unit length that indicates the direction faced by the vertex. Regardless of the facing direction, every point within a triangle has its own normal direction that is a linear interpolation of the ones stored in its vertices. This gives us the ability to fake the effect of high-resolution geometry on a low-resolution model. The following image shows the same geometric shape rendered with different per-vertex normals. In the image on the left, normals are orthogonal to the face represented by its vertices; this indicates that there is a clear separation between each face. On the right, normals are interpolated along the surface, indicating that even if the surface is rough, light should reflect as if it's smooth. It's easy to see that even if the three objects in the following image share the same geometry, they reflect light differently. Despite being made out of flat triangles, the object on the right reflects light as if its surface was actually curved:

Smooth objects with rough edges are a clear indication that per-vertex normals have been interpolated. This can be seen if we draw the direction of the normal stored in every vertex, as shown in the following image. You should note that every triangle has only three normals, but as multiple triangles can share the same vertex, more than one line can come out of it:
Calculating the normals from the 3D model is a technique that has rapidly declined in favor of a more advanced one: normal mapping. Similar to what happens with texture mapping, the normal directions can be provided using an additional texture, usually called a normal map or bump map. Normal maps are usually RGB images, where the RGB components are used to indicate the X, Y, and Z components of the normal direction. There are many ways to create normal maps these days. Some applications, such as CrazyBump and NDO Painter, will take in 2D data and convert it to normal data for you. Other applications, such as ZBrush 4R7, will take 3D sculpted data and create normal maps for you. The actual process of creating normal maps is definitely out of the scope of this book, but the links in the previous text should help you get started.

Unity makes the process of adding normals to your shaders quite an easy process in the Surface Shader realm using the UnpackNormal() function. Let's see how this is done.
How to do it…
The following are the steps to create a normal map shader:
Let's get the Properties block set up in order to have a color tint and texture.
By initializing the texture as "bump", we are telling Unity that the texture will contain a normal map. If the texture is not set, it will be replaced by a grey texture. The color used, (0.5, 0.5, 1), indicates no bump at all.
Link the properties to the Cg program by declaring them in the SubShader, below the CGPROGRAM statement.
We need to make sure that we update the Input struct with the proper variable name so that we can use the model's UVs for the normal map texture.

The following image demonstrates the result of our normal map shader:

Shaders can have both a texture map and normal map. It is not uncommon to use the same UV data to address both. However, it is possible to provide a secondary set of UVs in the vertex data (UV2) specifically used for the normal map.
How it works…
The actual math to perform the normal mapping effect is definitely beyond the scope of this chapter, but Unity has done it all for us already. It has created the functions for us so that we don't have to keep doing it over and over again. This is another reason why Surface Shaders are a really efficient way to write shaders.

If you look in the UnityCG.cginc file found in the CGIncludes folder in your Unity installation directory, you will find the definitions for the UnpackNormal() function. When you declare this function in your Surface Shader, Unity takes the provided normal map and processes it for you and gives you the correct type of data so that you can use it in your per-pixel lighting function. It's a huge time-saver! When sampling a texture, you get RGB values from 0 to 1; however, the directions of a normal vector range from -1 to +1. UnpackNormal() brings these components into the right range.

Once you have processed the normal map with the UnpackNormal() function, you send it back to your SurfaceOutput struct so that it can be used in the lighting function. This is done by assigning it to o.Normal. We will see how the normal is actually used to calculate the final color of each pixel in Chapter 3, Understanding Lighting Models.
There's more…
You can also add some controls to your normal map shader that let a user adjust the intensity of the normal map. This is easily done by modifying the x and y components of the normal map variable and then adding it all back together. Add another property to the Properties block, multiply it with the x and y components of the unpacked normal map, and reapply this value to the normal map variable.

Normal vectors are supposed to have lengths equal to one. Multiplying them by a factor changes their length, making normalization necessary.

Now, you can let a user adjust the intensity of the normal map in the material's Inspector tab. The following image shows the result of modifying the normal map with our scalar values:
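A possible sketch of this intensity tweak (the names _NormalTex and _NormalMapIntensity are assumptions, not necessarily the recipe's own):

```hlsl
sampler2D _NormalTex;       // the normal map texture
float _NormalMapIntensity;  // extra property, e.g. Range(0,3)

void surf (Input IN, inout SurfaceOutputStandard o) {
    // UnpackNormal remaps the 0..1 texture values to the -1..+1 range
    float3 normalMap = UnpackNormal(tex2D(_NormalTex, IN.uv_NormalTex));
    // Scale only x and y, then renormalize to restore unit length
    normalMap.x *= _NormalMapIntensity;
    normalMap.y *= _NormalMapIntensity;
    o.Normal = normalize(normalMap);
}
```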
Creating a transparent material
All the shaders seen so far have something in common: they are used for solid materials. If you want to improve the look of your game, transparent materials are often a good way to start. They can be used for anything from a fire effect to window glass. Working with them, unfortunately, is slightly more complicated. Before rendering solid models, Unity orders them according to the distance from the camera (Z ordering) and skips all the triangles that are facing away from the camera (culling). When rendering transparent geometries, these two aspects can cause problems. This recipe will show you how to solve some of these issues when it comes to creating a transparent Surface Shader. This topic will be heavily revisited in Chapter 6, Fragment Shaders and Grab Passes, where realistic glass and water shaders will be provided.
How to do it…
As mentioned previously, there are a few aspects that we need to take care of while using a Transparent Shader:
In the SubShader section of the shader, add the tags that signal that the shader is transparent.
As this shader is designed for 2D materials, make sure that the back geometry of your model is not drawn by adding a culling directive.
Tell the shader that this material is transparent and needs to be blended with what was drawn on the screen before.
Use this Surface Shader to determine the final color and transparency of the glass.
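The directives from these steps combine roughly as follows (a sketch; the texture and color properties are assumed):

```hlsl
SubShader {
    // Step 1: mark the shader as transparent and control its render queue
    Tags {
        "Queue" = "Transparent"
        "IgnoreProjector" = "True"
        "RenderType" = "Transparent"
    }
    // Step 2: do not draw back-facing geometry
    Cull Back
    CGPROGRAM
    // Step 3: alpha:fade blends the result with what is already on screen
    #pragma surface surf Standard alpha:fade
    sampler2D _MainTex;
    fixed4 _Color;
    struct Input { float2 uv_MainTex; };
    // Step 4: output both color and transparency
    void surf (Input IN, inout SurfaceOutputStandard o) {
        fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
        o.Albedo = c.rgb;
        o.Alpha  = c.a;
    }
    ENDCG
}
```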
How it works…
This shader introduces several new concepts. First of all, Tags are used to add information about how the object is going to be rendered. The really interesting one here is Queue. Unity, by default, will sort your objects for you based on the distance from the camera. So, as an object gets nearer to the camera, it is going to be drawn over all the objects that are further away from the camera. For most cases, this works out just fine for games, but you will find certain situations where you will want to have more control over the sorting of your objects in your scene. Unity has provided us with some default render queues, each with a unique value that directs Unity when to draw the object to the screen. These built-in render queues are as follows:
Render queue | Render queue description | Render queue value
Background | This render queue is rendered first. It is used for skyboxes | 1000
Geometry | This is the default render queue, used for most opaque geometry | 2000
AlphaTest | Alpha-tested geometry (such as foliage) uses this queue | 2450
Transparent | This queue is rendered after Geometry, in back-to-front order; it is used for alpha-blended materials | 3000
Overlay | This render queue is rendered last; it is meant for overlay effects such as lens flares | 4000
The fact that the Transparent queue is rendered after Geometry ensures that our material is drawn on top of all the solid objects behind it. The IgnoreProjector tag makes this object unaffected by Unity's projectors. Lastly, RenderType plays a role in shader replacement, a topic that will be covered briefly in Chapter 9, Gameplay and Screen Effects.
The last concept introduced is the alpha directive added to the #pragma surface line. This indicates that all the pixels from this material have to be blended with what was on the screen before according to their alpha values. Without this directive, the pixels will be drawn in the correct order, but they won't have any transparency.
Creating a Holographic Shader
More and more space-themed games are being released every year. An important part of a good sci-fi game is the way futuristic technology is presented and integrated in the gameplay. There's nothing that screams futuristic more than holograms. Despite being present in many flavors, holograms are often represented as semi-transparent, thin projections of an object. This recipe shows you how to create a shader that simulates such effects. Take this as a starting point: you can add noise, animated scanlines, and vibrations to create a truly outstanding holographic effect. The following image shows an example of a holographic effect:
According to the type of object that you will use, you might want its backside to appear. If this is the case, add Cull Off to the shader so that the back of the model won't be removed (culled).

This shader is not trying to simulate a realistic material, so there is no need to use the Standard lighting model. The Lambertian reflectance, which is very cheap, is used instead. Additionally, we should disable any ambient lighting with noambient and signal to Cg that this is a Transparent Shader using alpha:fade.

Change the Input structure so that Unity will fill it with the current view direction and world normal direction.

Use the following surface function. Remember that as this shader is using the Lambertian reflectance as its lighting function, the name of the surface output structure should be changed accordingly to SurfaceOutput instead of SurfaceOutputStandard.

You can now use the Rim effect slider to choose the strength of the holographic effect.
How it works…
As mentioned before, this shader works by showing only the silhouette of an object. If we look at the object from another angle, its outline will change. Geometrically speaking, the edges of a model are all those triangles whose normal direction is orthogonal (90 degrees) to the view direction. The Input structure declares these parameters, worldNormal and viewDir, respectively.

The problem of understanding when two vectors are orthogonal can be solved using the dot product, an operator that takes two vectors and returns zero if they are orthogonal. We use the rim effect property to determine how close to zero the dot product has to be for the triangle to fade completely.

The second aspect that is used in this shader is the gentle fading between the edge of the model (fully visible) and the angle determined by the rim effect property (invisible). This linear interpolation produces the fade.

Finally, the original alpha from the texture is multiplied with the newly calculated coefficient to achieve the final look.
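The silhouette logic can be sketched like this (the property name _DotProduct is an assumption standing in for the rim-effect slider):

```hlsl
struct Input {
    float2 uv_MainTex;
    float3 worldNormal; // filled by Unity with the world-space normal
    float3 viewDir;     // filled by Unity with the view direction
};
float _DotProduct;      // rim-effect slider, e.g. Range(0,1)

void surf (Input IN, inout SurfaceOutput o) {
    fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    // dot() approaches zero when the surface is seen edge-on
    float border = 1 - abs(dot(IN.viewDir, IN.worldNormal));
    // Fade linearly between the edge (visible) and facing angle (invisible)
    float alpha = border * (1 - _DotProduct) + _DotProduct;
    o.Alpha = c.a * alpha;
}
```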
There's more…
This technique is very simple and relatively inexpensive. Yet, it can be used for a large variety of effects, such as the following:
The slightly colored atmosphere of a planet in sci-fi games
The edge of an object that has been selected or is currently under the mouse
A ghost or specter
Smoke coming out of an engine
The shockwave of an explosion
The dot product plays an important role in the way reflections are calculated. Chapter 3, Understanding Lighting Models, will explain in detail how it works and why it is widely used in so many shaders.
Packing and blending textures
Textures are useful for storing not just pixel colors, as we generally tend to think of them, but loads of data: multiple sets of pixels in both the X and Y directions and across the RGBA channels. We can actually pack multiple images into one single RGBA texture and use each of the R, G, B, and A components as individual textures themselves by extracting each of these components in the shader code.

The result of packing individual grayscale images into a single RGBA texture can be seen in the following image:
Why is this helpful? Well, in terms of the amount of actual memory that your application takes up, textures are a large portion of your application's size. So, to begin reducing the size of your application, we can look at all of the images that we are using in our shader and see if we can merge these textures into a single texture.

Any texture that is grayscale can be packed into one of the RGBA channels of another texture. This might sound a bit odd at first, but this recipe is going to demonstrate one of the uses of packing a texture and using these packed textures in a shader.

One example of using these packed textures is when you want to blend a set of textures together onto a single surface. You see this most often in terrain-type shaders, where you need to blend from one texture into another nicely using some sort of control texture, or the packed texture in this case. This recipe covers this technique and shows you how you can construct the beginnings of a nice four-texture blended terrain shader.
How to do it…
Let's learn how to use packed textures by entering the code shown in the following steps:
We need to add a few properties to our Properties block. We will need five sampler2D objects, or textures, and two color properties.
We then need to create the SubShader section variables that will be our link to the properties.
So, now we have our texture properties and we are passing them to our surface function. In order to allow the user to change the tiling rates on a per-texture basis, we will need to modify our Input struct. This will allow us to use the tiling and offset parameters on each texture.
In the surf() function, get the texture information and store it in its own variables so that we can work with the data in a clean, easy-to-understand way.
Let's blend each of our textures together using the lerp() function. It takes three arguments: it takes in two textures and blends them with the float value given in the last argument.
Finally, we multiply our blended textures with the color tint values and use the red channel to determine where the two different terrain tint colors go.

The result of blending together four terrain textures and creating a terrain tinting technique can be seen in the following image:
How it works…
This might seem like quite a few lines of code, but the concept behind blending is actually quite simple. For the technique to work, we have to employ the built-in lerp() function from the CgFX standard library. This function allows us to pick a value between argument one and argument two using argument three as the blend amount:

    lerp(a, b, f) = a + f * (b - a)

Here, a and b are matching vector or scalar types. The f parameter can be either a scalar or vector of the same type as a and b.

So, for example, if we wanted to find the mid-value between 1 and 2, we could feed the value 0.5 as the third argument to the lerp() function and it would return the value 1.5. This works perfectly for our blending needs as the values of an individual channel in an RGBA texture are single float values, usually in the range of 0 to 1.
In the shader, we simply take one of the channels from our blend texture and use it to drive the color that is picked in a lerp() function for each pixel. For instance, we take our grass texture and dirt texture, use the red channel from our blending texture, and feed this to a lerp() function. This will give us the correct blended color result for each pixel on the surface.

A visual representation of the lerp() function is shown in the following image:

The shader code simply uses the four channels of the blend texture and all the color textures to create a final blended texture. This final texture then becomes our Albedo color that we can multiply with our diffuse lighting.
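The per-pixel blend described above can be sketched as follows (all texture and tint names here are illustrative, not the recipe's exact identifiers):

```hlsl
void surf (Input IN, inout SurfaceOutputStandard o) {
    // Sample the control texture and the four terrain textures
    float4 blendData = tex2D(_BlendTex, IN.uv_BlendTex);
    float4 grass     = tex2D(_GrassTex, IN.uv_GrassTex);
    float4 dirt      = tex2D(_DirtTex,  IN.uv_DirtTex);
    float4 rock      = tex2D(_RockTex,  IN.uv_RockTex);
    float4 sand      = tex2D(_SandTex,  IN.uv_SandTex);

    // Each channel of the blend texture drives one lerp()
    float4 blended = lerp(grass, dirt, blendData.r);
    blended = lerp(blended, rock, blendData.g);
    blended = lerp(blended, sand, blendData.b);

    // Use the red channel again to mix the two tint colors
    float4 tint = lerp(_TintColor1, _TintColor2, blendData.r);
    o.Albedo = (blended * tint).rgb;
}
```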
Creating a circle around your terrain
Many RTS games display distances (range attack, moving distance, sight, and so on) by drawing a circle around the selected unit. If the terrain is flat, this can be done simply by stretching a quad with the texture of a circle. If that's not the case, the quad will most likely be clipped behind a hill or another piece of geometry. This recipe will show you how to create a shader that allows you to draw circles around an object of arbitrary complexity. If you want to be able to move or animate your circle, we will need both a shader and a C# script. The following image shows an example of drawing a circle in a hilly region using a shader:

Now that the texture is set, you have to change the material of the terrain so that a custom shader can be provided.
Moving the circle

If you want the circle to follow your character, further steps are necessary:
Create a new C# script.
Add these properties to the script.
In the Update() method, add these lines of code.
Understanding Lighting Models

In the previous chapters, we introduced Surface Shaders and explained how we can change physical properties (such as Albedo and Specular) to simulate different materials. How does this really work? At the heart of every Surface Shader, there is its lighting model. That's the function that takes these properties and calculates the final shade of each pixel. Unity usually hides this from the developers because in order to write a lighting model, you have to understand how light reflects and refracts onto surfaces. This chapter will finally show you how lighting models work and give you the basics to create your own.
In this chapter, you will learn the following recipes:
Creating a custom diffuse lighting model
Creating a Toon Shader
Creating an Anisotropic Specular type

Simulating the way light works is a very challenging and resource-consuming task. For many years, video games have used very simple lighting models that, despite lacking realism, were very believable. Even if most 3D engines are now using physically-based renderers, it is worth exploring some simpler techniques. The ones presented in this chapter are reasonably realistic and widely adopted on devices with low resources such as mobile phones. Understanding these simple lighting models is also essential if you want to create your own.
Creating a custom diffuse lighting model
If you are familiar with Unity 4, you may know that the default shader it provided was
based on a lighting model called Lambertian reflectance. This recipe will show you how it is
possible to create a shader with a custom lighting model and explain the mathematics and
implementation behind it. The following image shows the same geometry rendered with a
Standard Shader (right) and a diffuse Lambert one (left):
Shaders based on the Lambertian reflectance are classified as non-photorealistic; no object in
the real world really looks like this. However, Lambert Shaders are still often used in low poly
games as they produce a neat contrast between the faces of complex geometries. The lighting
model used to calculate the Lambertian reflectance is also very efficient, making it perfect for mobile games.
Unity has already provided us with a lighting function that we can use for our shaders. It
is called the Lambertian lighting model. It is one of the more basic and efficient forms of
reflectance, which you can find in a lot of games even today. As it is already built into the Unity
Surface Shader language, we thought it best to start with this first and build on it. You can
also find an example in the Unity reference manual, but we will go into more depth and
explain where the data comes from and why it works the way it does. This will help you get
a good grounding in setting up custom lighting models so that we can build on this knowledge
in the future recipes in this chapter.
How to do it…
The Lambertian reflectance can be achieved with the following changes to the shader:
Begin by adding the following properties to the shader's Properties block.
Change the #pragma directive of the shader so that, instead of the Standard lighting model, it uses
our custom lighting model.
Use a very simple surface function, which just samples the texture according to its UV data.
Add a function called LightingSimpleLambert that will contain the following
code for the Lambertian reflectance:
half4 LightingSimpleLambert (SurfaceOutput s, half3 lightDir, half atten) {
    half NdotL = dot(s.Normal, lightDir);
    half4 c;
    c.rgb = s.Albedo * _LightColor0.rgb * (NdotL * atten);
    c.a = s.Alpha;
    return c;
}
According to the Lambertian reflectance, the amount of light a surface reflects depends on
the angle between the incident light and the surface normal. If you have played pool billiards,
you are surely familiar with this concept; the direction of a ball depends on its incident angle
against the wall. If you hit a wall at a 90 degree angle, the ball will come back at you; if you
hit it at a very low angle, its direction will be mostly unchanged. The Lambertian model
makes the same assumption; if the light hits a triangle at a 90 degree angle, all the light
gets reflected back. The lower the angle, the less light is reflected back to you. This concept is
shown in the following image:
This simple concept has to be translated into a mathematical form. In vector algebra, the
angle between two unit vectors can be calculated via an operator called the dot product. When
the dot product is equal to zero, the two vectors are orthogonal, which means that they form a 90
degree angle. When it is equal to one (or minus one), they are parallel to each other. Cg has a
function, dot, which implements the dot product extremely efficiently.
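The relationship between the dot product and the angle between two unit vectors can be verified with a short, self-contained Python sketch (illustrative only; in a shader, Cg's dot intrinsic performs the same computation on the GPU):

```python
import math

def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

# Orthogonal unit vectors (90 degree angle): dot product is 0.
print(dot((1, 0, 0), (0, 1, 0)))   # 0

# Parallel unit vectors: dot product is 1.
print(dot((0, 0, 1), (0, 0, 1)))   # 1

# In general, dot(a, b) equals the cosine of the angle between unit vectors.
a = normalize((1, 1, 0))
print(round(dot(a, (1, 0, 0)), 4)) # 0.7071, the cosine of 45 degrees
```

The last line shows why the dot product works as an angle measure: for unit vectors it is exactly the cosine of the angle between them.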
The following picture shows a light source (sun) shining on a complex surface. L indicates the
direction of the light and N is the normal to the surface. The light is
reflected with the same angle at which it hits the surface:
The Lambertian reflectance simply uses the N · L dot product as a multiplicative coefficient
for the intensity of light:

I = N · L

When N and L are parallel, all the light is reflected back to the source, causing the
geometry to appear brighter. The _LightColor0 variable contains the color of the
light that is calculated.
Prior to Unity 5, the intensity of the lights was different. If you are
using an old Diffuse shader based on the Lambertian model, you
may notice that the Lambertian term was multiplied by two. If you are importing a
custom shader from Unity 4, you will need to correct this manually.
Legacy Shaders, however, have already been designed taking this
aspect into account.
When the dot product is negative, the light is coming from the opposite side of the triangle.
This is not a problem for opaque geometries as triangles that are not facing the camera
frontally are culled (discarded) and not rendered.
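The full diffuse term, including the clamp for light arriving from behind a surface, can be sketched in Python (an illustration of the math, not Unity code; the vectors are arbitrary unit vectors):

```python
def lambert(normal, light_dir):
    """Lambertian diffuse coefficient: N . L, clamped so that light
    arriving from behind the surface contributes nothing."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, n_dot_l)

# Light hitting the surface head-on: full intensity.
print(lambert((0, 1, 0), (0, 1, 0)))      # 1

# Light grazing at a low angle: reduced intensity.
print(lambert((0, 1, 0), (0.8, 0.6, 0)))  # 0.6

# Light from behind the surface: clamped to zero.
print(lambert((0, 1, 0), (0, -1, 0)))     # 0.0
```

In the shader, the same clamp appears as max(0, ...) or saturate() around the dot product.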
This basic Lambert is a great starting point when you are prototyping your shaders as you can
get a lot accomplished in terms of writing the core functionality of the shader while not having
to worry about the basic lighting functions.
Unity has provided us with a lighting model that has already taken on the task of creating
Lambert lighting for you. If you look at the Lighting.cginc file found in your Unity
installation directory, under the CGIncludes folder, you will notice that it has the Lambert and
BlinnPhong lighting models available for you to use. The moment
you compile your shader with Lambert as its lighting model, you are telling the shader to utilize Unity's
implementation of the Lambert lighting function in the Lighting.cginc file so that we don't
have to write that code over and over again. We will explore how the BlinnPhong model works
later in this chapter.
Creating a Toon Shader
One of the most used effects in games is toon shading, which is also known as
cel shading (short for celluloid shading). It is a non-photorealistic rendering technique that makes 3D
models appear flat. Many games use it to give the illusion that the graphics are
hand-drawn rather than being 3D-modeled. You can see, in the following picture, a sphere
rendered with a Standard Shader (right) and a Toon Shader (left):
Achieving this effect using just surface functions is not impossible, but it would be extremely
expensive and time-consuming. The surface function, in fact, only works on the properties of
the material, not its actual lighting condition. As toon shading requires changing the way light
reflects, we need to create our own custom lighting model instead.
How to do it…
The toon aesthetic can be achieved with the following changes to the shader:
Add a new property for the ramp texture.
Add its relative variable in the CGPROGRAM section.
Change the #pragma directive so that it points to a custom lighting function:
There's more…
There are many different ways one can achieve a toon shading effect. Using different ramps
can produce dramatic changes in the way your models look, so you should experiment in
order to find the best one.
An alternative to ramp textures is to snap the light intensity to a certain number of values,
equidistantly sampled from 0 to 1:
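As a sketch of this snapping approach (illustrative Python, not the book's Cg; the floor-based quantization shown here is one common way to implement it):

```python
import math

def toon(intensity, levels):
    """Snap a light intensity in [0, 1] to one of `levels` equidistant
    bands, producing the flat, banded toon look."""
    return math.floor(intensity * levels) / levels

# With 4 bands, a smooth 0..1 gradient collapses to a few flat steps.
print([toon(i / 10, 4) for i in range(11)])
```

In a shader, the same expression would be applied to the clamped N · L value inside the custom lighting function.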
Creating a Phong Specular type
The core of the Phong lighting function raises the reflection term to a power and tints it with the Specular color:
float spec = pow(max(0, dot(reflectionVector, viewDir)), _SpecPower);
float3 finalSpec = _SpecularColor.rgb * spec;
// Final effect
fixed4 c;
c.rgb = (s.Albedo * _LightColor0.rgb * max(0, NdotL) * atten) + (_LightColor0.rgb * finalSpec);
c.a = s.Alpha;
return c;
How it works…
Let's break down the lighting function by itself, as the rest of the shader should be pretty
familiar to you at this point.
In the previous recipes, we have used a lighting function that provided only the light direction,
lightDir. Unity comes with a set of lighting functions that you can use, including one that
provides the view direction, viewDir. Refer to the following table:

Not view-dependent: half4 Lighting<Name> (SurfaceOutput s, half3 lightDir, half atten);
View-dependent: half4 Lighting<Name> (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten);

In our case, we are doing a Specular shader, so we need to have the view-dependent lighting
function structure. So, we have to write the following:

half4 LightingPhong (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten)
This will tell the shader that we want to create our own view-dependent shader. Always make
sure that your lighting function name is the same in your lighting function declaration and the
#pragma statement, or Unity will not be able to find your lighting model.
The components that play a role in the Phong model are described in the following image. We
have the light direction L (coupled with its perfect reflection R) and the normal N. They
have all been encountered before in the Lambertian model, with the exception of V, the
view direction:
The Phong model assumes that the final light intensity of a reflective surface is given by two
components: its diffuse color and Specular value, as follows:

I = D + S

The diffuse component D remains unchanged from the Lambertian model:

D = N · L

The Specular component S is defined as follows:

S = (R · V)^p

Here, p is the Specular power defined as _SpecPower in the shader. The only unknown
parameter is R, which is the reflection of L according to N. In vector algebra, this can be
calculated as follows:

R = 2 * (N · L) * N - L
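The reflection formula R = 2 * (N · L) * N - L can be checked numerically with a short Python sketch (illustrative only; note that Cg's built-in reflect function uses the opposite sign convention for the incident ray):

```python
def reflect_about_normal(light_dir, normal):
    """Perfect mirror reflection: R = 2 * (N . L) * N - L.
    Both vectors are assumed to be unit length."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return tuple(2 * n_dot_l * n - l for n, l in zip(normal, light_dir))

# Light arriving straight along the normal bounces straight back.
print(reflect_about_normal((0, 1, 0), (0, 1, 0)))      # (0, 1, 0)

# Light at an angle is mirrored to the other side of the normal.
print(reflect_about_normal((0.6, 0.8, 0), (0, 1, 0)))  # approximately (-0.6, 0.8, 0)
```

The second case shows the mirroring: the incoming direction and the reflected direction make the same angle with the normal, on opposite sides.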
This is exactly what is calculated in the following line:
float3 reflectionVector = normalize(2.0 * s.Normal * dot(s.Normal, lightDir) - lightDir);
This has the effect of bending the normal towards the light; as a vertex normal is pointing
away from the light, it is forced to look at the light. Refer to the following screenshot for a more
visual representation. The script that produces this debug effect is included on the book's
support page:
The following screenshot displays the final result of our Phong Specular calculation isolated in
the shader:
Creating a BlinnPhong Specular type
The BlinnPhong Specular is another, more efficient way of calculating and
estimating specularity. It is done by
getting the half vector from the view direction and light direction. It was brought into the world
of Cg by Jim Blinn. He found that it was much more efficient to just get the half vector instead
of calculating our own reflection vectors, which cut down on the code and processing time. If
you look at Unity's built-in BlinnPhong lighting function, you will notice that it uses the half vector as well, hence it is named BlinnPhong. It is just a
simpler version of the full Phong calculation.
c.rgb = (s.Albedo * _LightColor0.rgb * diff) + (_LightColor0.rgb * _SpecularColor.rgb * spec) * atten;
c.a = s.Alpha;
return c;
How it works…
The BlinnPhong Specular is almost exactly like the Phong Specular, except that it is more
efficient because it uses less code to achieve almost the same effect. Before the introduction
of physically-based rendering, this approach was the default choice for Specular reflection in
Unity 4.
Calculating the reflection vector R is generally expensive. The BlinnPhong Specular replaces it
with the half vector H between the view direction V and the light direction L.
Instead of calculating our own reflection vector, we simply take the vector halfway
between the view direction and light direction, basically simulating the reflection vector. It has
actually been found that this approach is more physically accurate than the previous one, but
we thought it necessary to show you all the possibilities:


According to vector algebra, the half vector can be calculated as follows:

H = (V + L) / |V + L|

Here, |V + L| is the length of the vector V + L. In Cg, we simply need to add the view direction
and light direction together and then normalize the result to a unit vector:

float3 halfVector = normalize(lightDir + viewDir);
Then, we simply need to dot the vertex normal with this new half vector to get our main
Specular value. After this, we just take it to a power of _SpecPower and multiply it by the
Specular color variable. It's much lighter on the code and math, but still gives us a nice
Specular highlight that will work for a lot of real-time situations.
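The half vector computation, and its use in the specular term, can be sketched in Python (illustrative; in Cg this is a single normalize(lightDir + viewDir) call followed by a dot and a pow):

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def half_vector(light_dir, view_dir):
    """Blinn's half vector: normalize(L + V), a cheap stand-in for the
    full Phong reflection vector."""
    return normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))

L = (0.6, 0.8, 0)    # light direction
V = (-0.6, 0.8, 0)   # view direction, mirrored across the normal
H = half_vector(L, V)
print(H)  # lies along the normal: approximately (0.0, 1.0, 0.0)

# The specular term is then dot(N, H) raised to a power.
N = (0, 1, 0)
spec = max(0.0, sum(n * h for n, h in zip(N, H))) ** 32
print(spec)  # close to 1.0: maximum highlight for this mirror setup
```

When the viewer sits exactly where Phong's reflection vector would point, H lines up with N and the highlight peaks, which is why the cheaper half vector is a good approximation.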
The lighting models seen in this chapter are extremely simple; no real material is perfectly matte
or perfectly specular. Moreover, it is not uncommon for complex materials such as clothing,
wood, and skin to require knowledge of how light scatters in the layers beneath the surface.
Use the following table to recap the different lighting models encountered so far:
Technique | Unity 5 shader | Light Intensity (I)
Lambertian | Legacy Shaders > Diffuse | I = N · L
Phong | None (custom) | I = N · L + (R · V)^p
BlinnPhong | Legacy Shaders > Specular | I = N · L + (N · H)^p
There are other
interesting models such as the Oren-Nayar lighting model for rough surfaces:
Creating an Anisotropic Specular type
Anisotropic is a type of Specular or reflection that simulates the directionality of grooves in a
surface, and modifies/stretches the Specular in the perpendicular direction. It is very useful
when you want to simulate brushed metals, as opposed to metals with a clear, smooth, and polished
surface. Imagine the Specular that you see when you look at the data side of a CD or DVD, or
the way the Specular is shaped at the bottom of a pot or pan. If you carefully
examine the surface, you will see that there is a direction to the grooves,
usually the direction in which the metal was brushed. When you apply a Specular to this surface, you get
a Specular stretched in the perpendicular direction.
This recipe will introduce you to the concept of augmenting your Specular highlights to achieve
different types of brushed surfaces. In future recipes, we will look at ways in which we can
use the concepts of this recipe to achieve other effects such as stretched reflections and hair,
but here, you are going to learn the fundamentals of the technique first. We will use this
shader as a reference for our own custom Anisotropic Shader:
The following screenshot shows examples of different types of Specular effects one can
achieve using Anisotropic Shaders in Unity:
How to do it…
To create an Anisotropic effect, we need to make the following changes to the shader
previously created:
We first need to add the properties that we are going to need for our shader. These
will allow a lot of artistic control over the final appearance of the surface:
fixed4 c;
c.rgb = ((s.Albedo * _LightColor0.rgb * NdotL) + (_LightColor0.rgb * _SpecularColor.rgb * spec)) * atten;
c.a = s.Alpha;
return c;
The Anisotropic normal map allows us to give the surface direction and helps us disperse the
Specular highlight around the surface. The following screenshot demonstrates the result of
our Anisotropic Shader:
How it works…
Let's break down this shader into its core components and explain why we are getting the
effect. We will mostly be covering the custom lighting function here, as the rest of the shader
should be pretty self-explanatory at this point.
We first start by declaring our own SurfaceAnisoOutput struct. We need to do this in order
to get the per-pixel information from the Anisotropic normal map, and the only way we can do
this in a Surface Shader is to use a custom lighting function. The following
code shows the custom surface output structure used in our shader:
We can use the SurfaceAnisoOutput struct as a way of interacting between the lighting
function and the surface function. In our case, we are storing the per-pixel texture information
in a variable in the surf() function and then passing this data to the
SurfaceAnisoOutput struct by storing it in the AnisoDirection variable. Once we have
this, we can use the per-pixel information in the lighting function using s.AnisoDirection.
With this data connection set up, we can move on to our actual lighting calculations. This
begins by getting the usual elements out of the way: the half vector, so that we don't have to do the
full reflection calculation, and the diffuse lighting, which is the vertex normal dotted with the light
vector or direction. This is done in Cg with the following lines:
Then, we start the actual modification to the Specular to get the right look. We first dot the
normalized sum of the vertex normal and the per-pixel vector from our Anisotropic normal
map with the half vector calculated in the previous step. This gives us a float value that
equals 1 when the surface normal, as modified by the Anisotropic normal map, is parallel
to the half vector, and 0 when it is perpendicular. Finally, we modify
this value with a sin() function so that we can basically get a darker middle highlight and,
ultimately, a ring effect based off of the half vector. All the previously mentioned operations are
summarized in the following two lines of Cg code:
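A Python sketch of the sin-based remapping just described (names and constants are illustrative, not the exact shader code):

```python
import math

def aniso_ring(h_dot_a, offset):
    """Remap a dot-product value through sin() so the response peaks on
    a ring around the highlight centre rather than at the centre."""
    return max(0.0, math.sin(math.radians((h_dot_a + offset) * 180)))

# Sweeping the dot product from 0 to 1: the value rises, peaks at 0.5,
# then falls again, which is what produces the ring-shaped highlight.
print([round(aniso_ring(x / 10, 0.0), 3) for x in range(0, 11, 2)])
```

Shifting the offset parameter slides the peak, which is how the highlight's position on the surface can be tuned.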
Finally, we scale the effect of the aniso value by raising it to a power and then globally
decrease its strength with a final multiplier.
This effect is great to create more advanced metal type surfaces, especially the ones that are
brushed and seem to have directionality to them. It also works well for hair or any sort of soft
surface with directionality to it. The following screenshot shows the result of displaying the
final Anisotropic lighting calculation:
Physically Based
Rendering in Unity 5
One of the biggest changes introduced in Unity 5 is physically-based rendering, also
known as PBR. Previous chapters have repeatedly mentioned it without revealing too much
about it. If you want to understand not only how PBR works, but also how to make the most out of
it, this is the chapter you should read.
In this chapter, you will learn the following recipes:
Understanding the metallic setup
Adding transparency to PBR
Creating mirrors and re�ective surfaces
Baking lights in your scene
All the lighting models encountered in Chapter 3, Understanding Lighting Models, were
very primitive descriptions of how light behaves. The most important aspect during their
making was efficiency. Real-time shading is expensive, and techniques such as the Lambertian
or BlinnPhong are a compromise between computational cost and realism. Having more
powerful graphics processing units (GPUs) has allowed us to write progressively more
sophisticated lighting models and rendering engines, with the aim of simulating how light
actually behaves. This is, in a nutshell, the philosophy behind PBR. As the name suggests, it
tries to get as close as possible to the physics behind the processes that give a unique look to
each material. Despite this, the term PBR has been widely used in marketing campaigns and
is more a synonym for state-of-the-art rendering than a well-defined technique. Unity
5 implements PBR by introducing two important changes. The first is a completely new lighting
model (called Standard). Surface Shaders allow developers to specify the physical properties
of a material, but they do not impose actual physical constraints on them. PBR fills this
gap using a lighting model that enforces principles of physics such as energy conservation
(an object cannot reflect more light than the amount it receives), microsurface scattering
(rough surfaces reflect light more erratically compared to smooth ones), Fresnel reflectance
(an object reflects more light when seen at grazing angles), and surface occlusion (the darkening of
corners and other geometries that are hard to light). All these aspects, and many others, are
used to calculate the Standard lighting model. The second aspect that makes PBR so realistic
is the simulation of physically-based light transport, known as global illumination (GI).
It means that objects are not drawn in the scene as if they were separate entities. They all
contribute to the final rendering, as light can reflect off them before hitting something else.
This aspect is not captured in the shaders themselves but is an essential part of how the
rendering engine works. Unfortunately, accurately simulating how light rays actually bounce
over surfaces in real time is beyond the capabilities of modern GPUs. Unity 5 makes some
clever optimizations that allow retaining visual fidelity without sacrificing performance. Some
of the most advanced techniques (such as reflections), however, require user input. All of
these aspects will be covered in this chapter. It is important to remember that PBR and GI do
not automatically guarantee that your game will be photorealistic. Achieving photorealism is a
very challenging task and, like every art, it requires great expertise and exceptional skills.
In the Standard Shader, purely metallic materials have dark diffuse components, and the
color of their specular reflections is determined by the Albedo map. Conversely, the diffuse
component of purely non-metallic materials is determined by the Albedo map, while the color of
their specular highlights is determined by the color of the incoming light. Following these
principles allows the metallic workflow to combine the albedo and specular into a single Albedo
map, enforcing physically-accurate behaviors. This also saves space, resulting in
a significant speed up at the expense of reduced control over the look of your materials.
For more information about the metallic setup, you can refer to these links:
Calibration chart: How to calibrate a metallic material
In order to have a transparent Standard material, changing the alpha channel of its
color property is not enough. Unless you properly set its Rendering Mode, your material will not
appear transparent. The Transparent rendering mode is perfect for windows, bottles, gems, and headsets.
You should notice that many transparent materials don't usually project shadows.
Fading objects
Sometimes, you want an object to fully disappear with a fading effect. In this case, specular
reflections and Fresnel refraction and reflection should disappear as well. When a fading
object is fully transparent, it should also be invisible. To do this, perform the following steps:
From the material's Inspector tab, set Rendering Mode to Fade.
As before, use the alpha channel of the Albedo color or map to determine the final transparency.
The following picture shows fading spheres. It is clear that the PBR effects
fade with the spheres as well; the last one on the right is almost invisible:
This rendering mode works best for non-realistic objects, such as holograms, laser rays, faux
lights, ghosts, and particle effects.
It's worth noticing that the Fade rendering mode does not allow the back of the geometry to be seen. In the
previous example, you could not see the inner volume of the sphere. If you require such an
effect, you need to create your own shader and make sure that the back geometry is not culled.
The examples in this recipe have been created using Unity 5 assets that are freely available in the Asset Store.
If your probe is used for a real mirror, you should check the Box Projection flag. If it is used for
other reflective surfaces, such as shiny pieces of metal or glass tables, you can uncheck it.
How it works…
When a shader wants information about its surroundings, it is usually provided in a structure
called a cube map. Cube maps have been briefly mentioned in Chapter 1, Creating Your First Shader,
as one of the shader property types. Loosely speaking, cube maps are the 3D equivalent of 2D textures; they represent a 360-degree
view of the world, as seen from a center point. Unity 5 previews cube maps with a spherical
projection, as seen in the following picture:
When cube maps are attached to a camera, they are referred to as skyboxes, as they are
used to provide a way to reflect the sky. They can be used to reflect geometries that are not in
the actual scene, such as nebulae, clouds, stars, and so on.
The reason why they are called cube maps is because of the way they are created: a cube
map is made up of six different textures, each one attached to the face of a cube. You can
create a cube map manually or delegate it to a reflection probe. You can imagine a reflection
probe as a collection of six cameras, creating a 360-degree mapping of the surrounding area. This
also gives you an idea of why probes are so expensive. By creating one in our scene, we allow
Unity to know which objects are around the mirror. If you need more reflective surfaces, you
can add multiple probes. You need no further action for the reflection probes to work; the
Standard Shaders will use them automatically.
You should notice that when probes are set to Realtime, they render their cube map at the
beginning of every frame. There is a trick to make this faster; if you know that the part of the
geometry that you want to reflect does not move, you can bake the reflection. This means
that Unity can calculate the reflection before starting the game, allowing more precise (and
computationally expensive) calculations. In order to do this, your reflection probe must be set
to Baked, and it will work only for objects that are flagged as Static. Static objects cannot move
or change, which makes them perfect for terrains, buildings, and props. Every time a static
object is moved, Unity will regenerate the cube maps for its baked reflection probes. This
might take from a few minutes to several hours.
You can mix baked and real-time probes to increase the realism of your game. Baked probes
will provide very high-quality environmental reflections, while the real-time ones
can be used for moving objects such as cars or mirrors. The next recipe, Baking lights in your scene,
will explain in detail how light baking works.
If you are interested in learning more about reflection probes, you should check these links:
Unity 5 manual page about Reflection Probes
Baking lights in your scene
Simulating lighting is a very expensive process. Even with state-of-the-art GPUs, accurately
simulating light transport (which is how light bounces between surfaces) can take hours.
In order to make this process feasible for games, real-time rendering is essential. Modern
engines compromise between realism and efficiency; most of the computation is done
beforehand in a process called light baking. This recipe will explain how light baking works
and how you can get the most out of it.
How to do it…
Light baking requires some manual configuration. There are three essential, yet independent,
steps that you need to take.
Configuring the static geometry
These steps must be followed for the configuration:
Identify all the objects in your scene that do not change position, size, and material.
Possible candidates are buildings, walls, terrains, props, trees, and others.
Select these objects and check the Static box in the Inspector tab, as shown
in the following image. If any of the selected objects has children, Unity will ask if
you want them to be considered static as well. If they meet the requirements (fixed
position, size, and material), select Yes, change children in the pop-up box:
If a light qualifies as a static object but illuminates non-static geometry, make sure that
its Baking property is set to Mixed. If it will affect only static objects, set it to Baked.
Configuring the light probes
There are objects in your game that will move, such as the main character, enemies, and the
other non-playable characters (NPCs). If they enter a static region that is illuminated, you
might want to surround it with light probes. To do this, follow the given steps:
From the menu, navigate to GameObject | Light | Light Probe Group. A new object
called Light Probe Group will appear in the Hierarchy.
Once selected, four interconnected spheres will appear. Click and move them around
the scene so that they enclose the static region in which your characters can enter.
The following picture shows an example of how light probes can be used to enclose
the volume of a static office space:
Select the moving objects that will enter the light probe region.
From their Inspector, expand the Mesh Renderer component and make sure that
Use Light Probes is checked (see the following image):
Deciding where and when to use light probes is a critical problem; more
information about this can be found in the How it works… section of this recipe.
Baking the lights
To finally bake the lights, open the Lighting window and select its Scene tab. If the
Auto checkbox is enabled, Unity will automatically execute the baking process
in the background. If not, click on Build.
Light baking can take several hours, even for a relatively small
scene. If you are constantly moving static objects or lights, Unity
will restart the process from scratch, causing a severe slowdown
in the editor. You can uncheck the Auto checkbox from the
Lighting window's Scene tab to prevent this, so that you can decide
when to start the process manually.
How it works…
The most complicated part of rendering is the light transport. During this phase, the GPU
calculates how the rays of light bounce between objects. If an object and its lights don't move,
this calculation can be done only once, as it will never change during the game. Flagging an
object as Static is how you tell Unity that such an optimization can be made.
Loosely speaking, light baking refers to the process of calculating the global illumination of a
scene and saving it in what is called a lightmap. Once baking is completed, lightmaps are used
to light the static geometry without repeating the light transport computation at runtime.
Light baking comes at a great expense: memory. Every static surface is, in fact, retextured so
that it already includes its lighting condition. Let's imagine that you have a forest of trees, all
sharing the same texture. Once they are made static, each tree will have its very own texture.
Light baking not only increases the size of your game, but can take a lot of texture memory if
used indiscriminately.
The last concept introduced in this recipe is light probes. Light baking produces extremely
high-quality results for static geometries, but it does not work on moving objects. If your
character is entering a static region, it can look somehow detached from the environment.
Its shading will not match the surroundings, resulting in an aesthetically unpleasant result.
Some objects, such as skinned mesh renderers, will not receive global illumination even
if made static. Baking lights in real time is not possible, although light probes offer an
effective alternative. Every light probe samples the global illumination at a specific point in
space. A light probe group can sample several points in space, allowing us to interpolate global
illumination within a specific volume. This allows us to cast better light on moving objects,
even though global illumination has been calculated only for a few points. It is
important to remember that light probes need to enclose a volume in order to work. It is best
to place light probes in regions where there is a sudden change in the light condition. Similar
to lightmaps, probes consume memory and should be placed wisely; remember that they exist
only for non-static geometry.
Even while using light probes, there are a few aspects that Unity's global illumination cannot
capture. Non-static objects, for instance, cannot reflect light onto other objects. You can read
more about light probes in Unity's documentation.
Vertex Functions
The term shader originates from the fact that Cg has been used mainly to simulate realistic
lighting conditions (shadows) on 3D models. Despite this, shaders are now much more
than that. They not only define the way objects are going to look, they can also redefine
their shapes entirely. If you want to learn how to manipulate the geometry of a 3D object via
shaders, this is the chapter for you.
In this chapter, you will learn the following recipes:
Accessing a vertex color in a Surface Shader
Animating vertices in a Surface Shader
Extruding your models
Implementing a snow shader
Implementing a volumetric explosion
In Chapter 1, Creating Your First Shader, we explained that 3D models are not just a collection
of triangles. Each vertex can contain data that is essential to render the model itself correctly.
This chapter will explore how to access this information in order to use it in a shader. We will
also explore in detail how the geometry of an object can be deformed simply using Cg code.
Accessing a vertex color in a Surface Shader
Let's begin this
chapter by taking a look at how we
can access the information of a model's
vertex using the vertex function in a Surface Shader. This will arm us with the knowledge
to start utilizing the elements contained within a model's vertex to create really useful and
visually appealing effects.
A vertex in a vertex function can return information about itself that we need to be aware of.
You can actually retrieve the vertices' normal directions as a float3 value, the position of the
vertex, and you can even store color values in each vertex and return that color as
float4. This is what we will take a look at in this recipe. We need to see how to store color
information and retrieve this stored color information inside each vertex of a Surface Shader.
Your scene should now look similar to the following screenshot:
How to do it…
With our scene, shader, and material created and ready to go, we can begin to write the code
for our shader. Launch the shader by double-clicking on it in the Project tab in the Unity
editor. Perform the following steps:
As we are creating a very simple shader, we will not need to include many properties in
the Properties block. We will still include a global tint color, just to stay consistent
with the other shaders in this book. Enter the following code in the Properties
block of your shader:
This next step tells Unity that we will be including a vertex function in our shader:
As usual, if we have included properties in our Properties block, we must make
sure to create a corresponding variable in our CGPROGRAM statement. Enter the
following code just below the CGPROGRAM declaration:
We now turn our attention to the Input struct. We need to add a new variable in
order for our surf() function to access the data given to us by our vert() function:
Now, we can write our simple vert() function to gain access to the colors stored in
each vertex of our mesh:
Finally, we can use the vertex color data from our Input struct and assign it to the
o.Albedo parameter of the built-in SurfaceOutput struct:
With our code completed, we can now re-enter the Unity editor and let the shader
compile. If all goes well, you should
see something similar to the following screenshot:
How it works…
Unity provides us with a way to access the vertex information of the model to which a shader
is attached. This gives us the power to modify things such as the vertices' position and color.
With this recipe, we have imported a mesh from Maya (though just about any 3D software
application can be used), where vertex colors were added to it. You'll notice that by
importing the model, the default material will not display the vertex colors. We actually have
to write a shader to extract the vertex color and display it on the surface of the model. Unity
provides us with a lot of built-in functionality when using Surface Shaders, which makes the
process of extracting this vertex information quick and efficient.
Our first task is to tell Unity that we will be using a vertex function when creating our
shader. We do this by adding the vertex:vert parameter to the #pragma statement of
the shader. This automatically makes Unity look for a vertex function named vert when it
goes to compile the shader. If it doesn't find one, Unity will throw a compiling error and ask you
to add a vert function to your shader.
This brings us to our next step. We have to actually code the vert function seen in step 5. By having this function, we can access the built-in data struct called appdata_full. This built-in struct is where the vertex information is stored. So, we then extract the vertex color information by passing it to our Input struct with the line o.vertColor = v.color. The o variable represents our Input struct and the v variable is our appdata_full vertex data. In this case, we are simply taking the color information from the appdata_full struct. Once the vertex color is in our Input struct, we can use it in the surf() function. In the case of this recipe, we simply apply the color to the o.Albedo parameter of the built-in SurfaceOutput struct.
There's more…
One can also access a fourth component from the vertex color data. If you notice, the vertColor variable we declared in the Input struct is of the float4 type. This means that we are also passing the alpha value of the vertex colors. Knowing this, you can use it to your advantage to store a fourth vertex color component, to perform effects such as transparency, or to give yourself one more mask with which to blend two textures. It's really up to you and your production to determine if you really need to use the fourth component, but it is worth mentioning here.
With Unity 5, we now have the ability to target shaders to DirectX 11. This is great, but it means that the compiling process for the shaders is now a bit pickier. This means that we need to include one more line of code in our shader to initialize the output of the vertex information properly. The following code shows what the vertex function looks like if you are targeting DirectX 11 in your shader:
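A sketch of such a vertex function, using Unity's UNITY_INITIALIZE_OUTPUT macro (the vertColor field name follows this recipe's example):

```c
void vert(inout appdata_full v, out Input o) {
    // DX11 requires output structs to be fully initialized before use
    UNITY_INITIALIZE_OUTPUT(Input, o);
    o.vertColor = v.color;
}
```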
By including this line of code, your Vertex Shader will not throw any warnings saying that it won't compile for DirectX 11 appropriately.
Animating vertices in a Surface Shader
Now that we know how to access data on a per-vertex basis, let's expand our knowledge set to include other types of data, such as the position of a vertex.
Using a vertex function, we can access the position of each vertex in a mesh. This allows us to
actually modify each individual vertex while the shader does its processing.
In this recipe, we will create a shader that will allow us to modify the positions of each vertex
on a mesh with a sine wave. This technique can be used to create animations for objects such
as flags or waves on an ocean.
Your scene should look similar to the following screenshot:
How to do it…
With our scene ready to go, let's double-click on our newly created shader to open it in MonoDevelop.
Let's begin our shader by populating the Properties block.
We now need to tell Unity that we are going to be using a vertex function by adding
the following to the #pragma statement.
In order to access the values that have been given to us by our properties, we need to
declare a corresponding variable in our CGPROGRAM block. After completing the code for your shader, switch back to
Unity and let the shader compile.
Once compiled, you should see something similar to the following screenshot:
How it works…
This particular shader uses the same concept as the last recipe, except that this time, we are modifying the positions of the vertices in the mesh. This is really useful if you don't want to rig up simple objects, such as a flag, and then animate them using a skeleton structure or hierarchy of transforms.
We simply create a sine wave value using the sin() function that is built into the Cg language. After calculating this value, we add it to the y value of each vertex position, creating a wave-like effect.
We also did a little bit of modification to the normal of the mesh, just to give it more realistic shading based on the sine wave value.
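A minimal sketch of such a vertex function; the _Speed, _Frequency, and _Amplitude property names are our own assumptions:

```c
void vert(inout appdata_full v) {
    // Build a time-varying sine wave from the vertex's x position
    float time = _Time.y * _Speed;
    float waveValue = sin(time + v.vertex.x * _Frequency) * _Amplitude;
    // Offset the vertex along y and tilt the normal to match the wave
    v.vertex.y += waveValue;
    v.normal = normalize(float3(v.normal.x + waveValue, v.normal.y, v.normal.z));
}
```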
You will
see how easy it is to perform more complex
vertex effects by utilizing the built-in
vertex parameters that Surface Shaders give us.
Extruding your models
One of the biggest problems in games is repetition. Creating new content is a time-consuming task, and when you have to face thousands of enemies, chances are that they will all look the same. A relatively cheap technique to add variation to your models is using a shader that alters their basic geometry. This recipe will show you a technique called normal extrusion, which can be used to create a chubbier or skinnier version of a model, as shown in the following picture with the soldier from the Unity camp demo:
Modify the #pragma directive so that it now uses a vertex modifier. You can do this by adding vertex:vert at the end of it. In our case, we have called the vertex function vert.
Add the following vertex modifier:
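The normal-extrusion modifier might be sketched as follows, assuming an _Amount property on the shader:

```c
void vert(inout appdata_full v) {
    // Push every vertex along its own normal;
    // negative values shrink the model, positive values inflate it
    v.vertex.xyz += v.normal * _Amount;
}
```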
The shader is now ready; you can use the _Amount slider in the material's Inspector tab to make your model skinnier or chubbier.
How it works…
Surface Shaders work in two steps. In all the previous chapters, we only explored the last one: the surface function. There is another function that can be used: the vertex modifier. It takes the data structure of a vertex (which is usually called appdata_full) and applies a transformation to it. This gives us the freedom to do virtually anything with the geometry of our model. We signal to Unity that such a function exists by adding vertex:vert to the #pragma directive of the Surface Shader. You can refer to Chapter 6, Fragment Shaders and Grab Passes, to learn how vertex modifiers can be defined in a Vertex and Fragment Shader instead.
One of the simplest, yet most effective, techniques that can be used to alter the geometry of a model is called normal extrusion. It works by projecting a vertex along its normal direction. This is done by the following line of code:
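Restoring the referenced line (assuming the _Amount property used in this recipe):

```c
v.vertex.xyz += v.normal * _Amount;
```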
The position of a vertex is displaced by _Amount units toward the vertex normal. If _Amount gets too high, the results can be quite unpleasant. With smaller values, however, you can add a lot of variation to your models.
There's more…
If you have multiple enemies and want each one to have its own _Amount value, you have to create a different material for each one of them. This is necessary as materials are normally shared between models, and changing one will change all of them. There are several ways in which you can do this; the quickest one is to create a script that automatically does it for you. The following script, once attached to an object with a Renderer component, will duplicate its first material and set the _Amount property automatically:
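A sketch of such a script; the class name and the _Amount range are our own choices:

```csharp
using UnityEngine;

public class NormalExtruder : MonoBehaviour {

    // Use this for initialization
    void Start () {
        // Accessing .material (rather than .sharedMaterial) clones the
        // material for this renderer only, so other models are unaffected
        Material material = GetComponent<Renderer>().material;
        material.SetFloat("_Amount", Random.Range(-0.2f, 0.2f));
    }
}
```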

In shaders, color channels go from 0 to 1, although sometimes you need to represent negative values as well (such as inward extrusion). When this is the case, treat 0.5 as zero, with smaller values considered negative and higher values positive. This is exactly what happens with normals, which are usually encoded in RGB textures. The UnpackNormal() function is used to map a value in the range (0,1) onto the range (-1,+1). Mathematically speaking, this is equivalent to 2*(value-0.5).
Extrusion maps are perfect to zombify characters by shrinking the skin to highlight the shape of the bones underneath. The following picture shows you how a soldier can be transformed into a corpse using just a shader and an extrusion map. Compared to the previous example, you can notice how the clothing is unaffected. The shader used in the following picture also darkens the extruded regions to give an even more emaciated look to the soldier:
Implementing a snow shader
The simulation of snow has always been a challenge in games. The vast majority of games simply include snow directly in the models' textures so that their tops look white. However, what if one of these objects starts rotating? Snow is not just a lick of paint on a surface; it is a proper accumulation of material and should be treated as such. This recipe shows you how to give a snowy look to your models using just a shader.
This effect is achieved in two steps. First, a white color is used for all the triangles facing the sky. Second, their vertices are extruded to simulate the effect of snow accumulation. You can see the result in the following picture:
Keep in mind that this recipe does not aim to create a photorealistic snow effect. It provides a good starting point, but it is up to an artist to create the right textures and find the right parameters to make it fit your game.
Replace the surface function with the following one. It will color the snowy parts of
the model white:
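A sketch of such a surface function; property names such as _Snow, _SnowColor, and _SnowDirection are assumptions based on this recipe:

```c
void surf(Input IN, inout SurfaceOutputStandard o) {
    half4 c = tex2D(_MainTex, IN.uv_MainTex);
    // Compare the world-space normal against the snow direction;
    // triangles facing the sky are painted white
    if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) >= _Snow)
        o.Albedo = _SnowColor.rgb;
    else
        o.Albedo = c.rgb * _Color;
    o.Alpha = 1;
}
```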
Change the #pragma directive so that it uses vertex modifiers:
Add the following vertex modi�ers, which extrude the vertices covered in snow:
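The extrusion step might be sketched as follows; _SnowDepth is an assumed property controlling the thickness of the snow:

```c
void vert(inout appdata_full v) {
    // One way to bring the world-space snow direction into object space
    float4 sn = mul(unity_WorldToObject, _SnowDirection);
    // Extrude only the vertices that are considered covered in snow
    if (dot(v.normal, normalize(sn.xyz)) >= _Snow)
        v.vertex.xyz += (normalize(sn.xyz) + v.normal) * _SnowDepth;
}
```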
You can now use the material's Inspector tab to select how much of your model is going to be covered and how thick the snow should be.
How it works…
This shader works in two steps.
Coloring the surface
The first one alters the color of the triangles that are facing the sky. It affects all the triangles with a normal direction similar to _SnowDirection. As seen before in Chapter 3, Understanding Lighting Models, comparing unit vectors can be done using the dot product. When two vectors are orthogonal, their dot product is zero; it is one (or minus one) when they are parallel to each other. The _Snow property is used to decide how aligned they should be in order to be considered facing the sky.
If you look closely at the surface function, you can see that we are not dotting the normal and the snow direction directly. This is because they are usually defined in different spaces. The snow direction is expressed in world coordinates, while the object normals are usually relative to the model itself. If we rotate the model, its normals will not change, which is not what we want. To fix this, we need to convert the normals from their object coordinates to world coordinates. This is done with the WorldNormalVector() function, as seen in the following code:
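The comparison line might read (assuming the _Snow and _SnowDirection properties from this recipe):

```c
if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) >= _Snow)
    o.Albedo = _SnowColor.rgb;
```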
This shader simply colors the model white; a more advanced one should initialize the SurfaceOutputStandard structure with textures and parameters from a realistic snow material.
This effect works well with the standard Unity Sphere, but if you need big explosions, you might need to use a more high-poly sphere. In fact, a vertex function can only modify the vertices of a mesh. All the other points will be interpolated using the positions of the nearby vertices. Fewer vertices mean a lower resolution for your explosions.
For this recipe, you will also need a ramp texture that has, in a gradient, all the colors your explosions will have. You can create a texture like the following image.
Once you have the picture, import it to Unity. Then, from its Inspector, make sure that Filter Mode is set to Bilinear and Wrap Mode to Clamp. These two settings make sure that the ramp texture is sampled smoothly.
Lastly, you will need a noisy texture. You can search on the Internet for freely available noise textures. The most commonly used ones are generated with Perlin noise.
How to do it…
This effect works in two steps: a vertex function to change the geometry, and a surface function to give it the right color. The steps are as follows:
Add the following properties to the shader:
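The properties for this effect might look like the following sketch; the names and ranges are assumptions:

```c
Properties {
    _RampTex ("Color Ramp", 2D) = "white" {}
    _RampOffset ("Ramp offset", Range(-0.5, 0.5)) = 0
    _NoiseTex ("Noise Texture", 2D) = "gray" {}
    _Period ("Period", Range(0, 1)) = 0.5
    _Amount ("Amount", Range(0, 1.0)) = 0.1
}
```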
We specify the vertex function in the #pragma directive, adding the nolightmap parameter to prevent Unity from adding realistic lighting to our explosion:
The last step is selecting the material and, from its Inspector, attaching the two textures to their relative slots. This is an animated material, meaning that it evolves over time. You can watch the material changing in the editor by clicking on Animated Materials from the Scene window.
How it works…
If you are reading this recipe, you should already be familiar with how Surface Shaders and vertex modifiers work. The main idea behind this effect is to alter the geometry of the sphere in a seemingly chaotic way, exactly like it happens in a real explosion. The following picture shows you how such an explosion looks inside the editor. You can see that the original mesh has been heavily deformed:
The vertex function is a variant of the technique called normal extrusion, introduced in the Extruding your models recipe of this chapter. The difference here is that the amount of extrusion is determined both by the time and the noise texture.
When you need a random number in Unity, you can rely on the Random.Range() function. There is no standard way to get random numbers in a shader, so the easiest way is to sample a noise texture.
Again, there is no standard way to do this, so take the following as an example only:
The built-in _Time[3] variable is used to get the current time from within the shader, and the red channel of the noise texture is used to make sure that each vertex moves independently. The sin() function makes the vertices go up and down, simulating the chaotic behavior of an explosion. Then, the normal extrusion takes place:
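The vertex function might be sketched like this; the property names follow the assumed Properties block above:

```c
void vert(inout appdata_full v) {
    // Vertex stages must use tex2Dlod (tex2D is unavailable here)
    float disp = tex2Dlod(_NoiseTex, float4(v.texcoord.xy, 0, 0)).r;
    // _Time[3] drives the animation; the red noise channel desynchronizes vertices
    float time = sin(_Time[3] * _Period + disp * 10);
    // Normal extrusion, scaled by noise and the sine of time
    v.vertex.xyz += v.normal * disp * _Amount * time;
}
```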
You should play with these numbers and variables until you �nd a pattern of movement that
you are happy with.
The last part of the effect is achieved by the surface function. Here, the noise texture is used to sample a random color from the ramp texture. However, there are two more aspects that are worth noticing. The first one is the introduction of the _RampOffset variable.
It is worth noticing, however, that all the objects with the same material share the same look. If you have multiple explosions at the same time, they should not use the same material. When you are instantiating a new explosion, you should also duplicate its material. You can do this with the same approach used in the Extruding your models recipe of this chapter.
If you are
looking for high-quality explosions, check out
Fragment Shaders and
Grab Passes
So far, we have relied on Surface Shaders. They have been designed to simplify the way
shader coding works, providing meaningful tools for artists. If we want to push our knowledge
of shaders further, we need to venture into the territory of Vertex and Fragment Shaders.
In this chapter, you will learn the following recipes:
Understanding Vertex and Fragment Shaders
Implementing a Glass Shader
Implementing a Water Shader for 2D games
Compared to Surface Shaders, Vertex and Fragment Shaders come with little to no information about the physical properties that determine how light reflects on surfaces. What they lack in expressivity, they compensate for with power: Vertex and Fragment Shaders are not limited by physical constraints and are perfect for non-photorealistic effects. This chapter will focus on a technique called the grab pass, which allows these shaders to simulate deformations.
Understanding Vertex and Fragment Shaders
The best way to
understand how Vertex and Fragment Shaders work is by creating one
yourself. This recipe will show you how to write one of these shaders, which will simply apply a
texture to a model and multiply it by a given color, as shown in the following image:
The shader presented here is very simple, and it will be used as a starting base for all the
other Vertex and Fragment Shaders.
How to do it…
In all the previous chapters, we have always been able to refit Surface Shaders. This is not the case anymore, as Surface Shaders and Vertex and Fragment Shaders are structurally different. We will need to make the following changes:
Delete all the properties of the shader, replacing them with the following:
Delete all the code in the SubShader block and replace it with this one:
How it works…
As the name suggests, Vertex and Fragment Shaders work in two steps. The model is first passed through a vertex function; the result is then inputted to a fragment function. Both functions are specified using #pragma directives. In this case, they are simply called vert and frag.
Conceptually speaking, fragments are closely related to pixels; the term fragment is often used to refer to the collection of data necessary to draw a pixel. This is also why Vertex and Fragment Shaders are often called Pixel Shaders.
The vertex function takes the input data in a structure that is defined as vertInput in the shader. Its name is totally arbitrary, but its content is not. Each field of the struct must be decorated with a binding semantic. This is a feature of Cg that allows us to mark variables so that they will be initialized with certain data, such as normal vectors and vertex positions. The binding semantic POSITION indicates that when vertInput is inputted to the vertex function, its vertex field will contain the position of the current vertex. This is similar to the appdata_full structure in a Surface Shader. The main difference is that vertex is represented in model coordinates (relative to the 3D object), which we need to convert to view coordinates manually (relative to the position on the screen).
The vertex function in a Surface Shader is used to alter
the geometry of the model only. In a Vertex and Fragment
Shader, instead, the vertex function is necessary to project
the coordinates of the model to the screen.
The mathematics behind this conversion is beyond the scope of this chapter. However, this can be achieved by multiplying vertex by a special matrix provided by Unity: UNITY_MATRIX_MVP. It is often referred to as the model-view-projection matrix, and it is essential to find the position of a vertex on the screen:
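Restoring the referenced line (UNITY_MATRIX_MVP is Unity's built-in model-view-projection matrix):

```c
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
```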
The other piece of information initialized is texcoord, which uses the TEXCOORD0 binding semantic to get the UV data of the first texture. No further processing is required and this value can be passed directly to the fragment function:
While Unity will initialize vertInput for us, we are responsible for the initialization of vertOutput. Despite this, its fields still need to be decorated with binding semantics:
Once the vertex function has initialized vertOutput, the structure is passed to the fragment function. This samples the main texture of the model and multiplies it by the color provided.
As you can see, the Vertex and Fragment Shader has no knowledge of the physical properties of the material; compared to a Surface Shader, it works closer to the architecture of the GPU.
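Putting it all together, the whole shader described in this recipe might be sketched as follows; the struct names follow the recipe's vertInput/vertOutput convention:

```c
Shader "Custom/SimpleVertFrag" {
    Properties {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            half4 _Color;
            sampler2D _MainTex;

            struct vertInput {
                float4 pos : POSITION;
                float2 texcoord : TEXCOORD0;
            };
            struct vertOutput {
                float4 pos : SV_POSITION;
                float2 texcoord : TEXCOORD0;
            };

            // Project the vertex to the screen and pass the UVs through
            vertOutput vert(vertInput input) {
                vertOutput o;
                o.pos = mul(UNITY_MATRIX_MVP, input.pos);
                o.texcoord = input.texcoord;
                return o;
            }

            // Sample the texture and multiply it by the given color
            half4 frag(vertOutput output) : COLOR {
                half4 mainColour = tex2D(_MainTex, output.texcoord);
                return mainColour * _Color;
            }
            ENDCG
        }
    }
}
```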
There's more…
One of the most confusing aspects of Vertex and Fragment Shaders is binding semantics. There are many others that you can use, and their meaning depends on the context.
The semantics in the following table can be used in the vertInput structure that Unity provides to the vertex function. The fields decorated with these semantics will be initialized automatically:
POSITION, SV_POSITION: The position of a vertex in world coordinates (object space), for example: float4 vertex : POSITION;
NORMAL: The normal of a vertex, relative to the world (not to the camera)
Binding semantics have a different meaning when used in the vertOutput structure; they do not automatically guarantee that the fields will be initialized. Quite the opposite; it's our responsibility to do so. The compiler will do its best to ensure that the fields are initialized with the right data:
SV_POSITION: The position of a vertex in camera coordinates (clip space, from zero to one for each dimension), for example: float4 vertex : SV_POSITION;
The vertex and fragment function signatures referenced above follow this pattern: vertOutput vert(vertInput v) and half4 frag(vertOutput i) : COLOR.
In order to sample this texture, we need its UV data. The ComputeGrabScreenPos() function returns data that we can use later to sample the grab texture correctly. This is done in the Fragment Shader using the following line:
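Restoring the referenced line (i.uvgrab is the field computed in the vertex function):

```c
fixed4 col = tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(i.uvgrab));
```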
This is the standard way in which a texture is grabbed and applied to the screen in its correct
position. If everything has been done correctly, this shader will simply clone what has been
rendered behind the geometry. We will see in the following recipes how this technique can be
used to create materials such as water and glass.
There's more…
Every time you use a material with GrabPass, Unity will have to render the screen to a texture. This operation is very expensive and limits the number of grab passes that you can use in a game. Cg offers a slightly different variation:
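The named variant reads as follows; _GrabTexture is Unity's conventional name, but any shared name works:

```c
GrabPass { "_GrabTexture" }
```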
This line not only allows you to give a name to the texture, but it also shares the texture with all the materials that have a grab pass called _GrabTexture. This means that if you have ten materials, Unity will only do a single grab pass and share the texture with all of them. The main problem of this technique is that it doesn't allow effects that can be stacked. If you are creating a glass with this technique, you won't be able to have two glasses one after the other.
Glass is a very complicated material; it should not be a surprise that other chapters have already created shaders to simulate it, such as the Adding transparency to PBR recipe.
However, there is an effect that transparency cannot reproduce: deformation. Most glasses are not perfect, hence they create distortions when we look through them. This recipe will teach you how to do this. The idea behind this effect is to use a Vertex and Fragment Shader with a grab pass, and then sample the grab texture with a little change to its UV data to create a distortion. You can see the effect in the following image, using the stained-glass textures from the Unity Standard Assets:
How to do it…
Let's start by editing the Vertex and Fragment Shaders:
Add these two properties to the Properties block:
Add their variables in the second pass:
Add the texture information in the input and output structures:
Transfer the UV data from the input to the output structure:
Use the following fragment function:
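The fragment function might be sketched as follows; _BumpMap and _Magnitude are the two properties assumed to have been added above:

```c
half4 frag(vertOutput i) : COLOR {
    // Unpack the normal map and use it to offset the grab UVs
    half4 bump = tex2D(_BumpMap, i.texcoord);
    half2 distortion = UnpackNormal(bump).rg;
    i.uvgrab.xy += distortion * _Magnitude;
    // Sample the screen behind the glass at the distorted position
    fixed4 col = tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(i.uvgrab));
    return col * tex2D(_MainTex, i.texcoord) * _Color;
}
```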
How it works…
The core of this shader is the grab pass, used to take what has already been rendered on the screen. The part where the distortion takes place is in the fragment function. Here, a normal map is unpacked and used to offset the UV data of the grab texture:
The _Magnitude slider is used to determine how strong the effect is.
There's more…
This effect is very generic; it grabs the screen and creates a distortion based on a normal map. There is no reason why it shouldn't be used to simulate more interesting things. Many games use distortions around explosions or other sci-fi devices. This material can be applied to a sphere and, with a different normal map, it would simulate the heat wave of an explosion perfectly.
Implementing a Water Shader for 2D games
The Glass Shader introduced in the previous recipe is static; its distortion never changes. It takes just a few changes to convert it to an animated material, making it perfect for 2D games that feature water. This recipe uses a technique similar to the one shown in the Animating vertices in a Surface Shader recipe.
How to do it…
To create this animated effect, you can start by refitting the shader from the previous recipe. Follow these steps:
Add the following properties:
This variable determines the size of the waves. This solution is closer to the final version, but has a severe issue: if the water quad moves, the UV data follows it and you can see the water waves following the material rather than being anchored to the background. To solve this, we need to use the world position of the current fragment as the initial position for the waves.
As happens with all these special effects, there is no perfect solution. This recipe shows you a technique to create water-like distortion, but you are encouraged to play with it until you find an effect that fits the aesthetics of your game.
In the next two chapters, we are going to take a look at making the shaders that we write performance-friendly for different platforms. We won't be talking about any one platform specifically, but we are going to break down the elements of shaders that we can adjust to make them more optimized for mobile and efficient on any platform in general. These techniques range from understanding what Unity offers in terms of built-in variables that reduce the overhead of the shaders' memory, to learning about ways in which we can make our own shader code more efficient. This chapter will cover the following recipes:
Profiling your shaders
Modifying our shaders for mobile
Learning the art of optimizing your shaders will come up in just about any game project that you work on. There will always come a point in any production where a shader needs to be optimized, or maybe it needs to use fewer textures but produce the same effect. As a technical artist or shader programmer, you have to understand these core fundamentals to optimize your shaders so that you can increase the performance of your game while still achieving the same visual fidelity. Having this knowledge can also help in setting the way in which you write your shader from the start. For instance, by knowing that the game built using your shader will be played on a mobile device, we can automatically set all our lighting functions to use a half vector as the view direction or set all of our float variable types to fixed or half. These, and many other techniques, all contribute to your shaders running efficiently on your target hardware. Let's begin our journey and start learning how to optimize our shaders.
What is a cheap shader?
When first asked the question, what is a cheap shader, it might be a little tough to answer, as there are many elements that go into making a more efficient shader. It could be the amount of memory used up by your variables. It could be the amount of textures the shader is using. It could also be that our shader works fine, but we can actually produce the same visual effect with half the amount of data by reducing the amount of code we are using or data we are creating. We are going to explore a few of these techniques in this recipe and show how they can be combined to make your shader fast and efficient, but still produce the high-quality visuals everyone expects from games today, whether on a mobile or a PC.
Finally, modify the shader so that it uses a diffuse texture and normal map and
includes your own custom lighting function. The following image shows the result of
modifying our default shader that we created in step 1:
You should now
have a setup similar to the following image. This setup will allow us to take
a look at some of the basic concepts that go into optimizing shaders using Surface Shaders
in Unity:
How to do it…
We are going to build a simple Diffuse shader to take a look at a few ways in which you can
optimize your shaders in general.
First, we'll optimize our variable types so that they use less memory when they are processing data:
Let's begin with the Input struct in our shader. Currently, our UVs are being stored in a variable of the float2 type. We need to change this to use half2 instead.
We can then move to our lighting function and reduce the variables' memory footprint by changing their types to the following:
Finally, we can complete this optimization pass by updating the variables in our surf() function.
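The precision reductions described above might look like the following sketch; the lighting function name and variables are illustrative:

```c
struct Input {
    half2 uv_MainTex;   // was float2: half precision is enough for UVs
};

fixed4 LightingSimpleLambert (SurfaceOutput s, fixed3 lightDir, fixed atten) {
    // fixed precision is enough for lighting and color calculations
    fixed diff = max(0, dot(s.Normal, lightDir));
    fixed4 c;
    c.rgb = s.Albedo * _LightColor0.rgb * (diff * atten * 2);
    c.a = s.Alpha;
    return c;
}
```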
Now that we have our variables optimized, we are going to take advantage of a built-in lighting function variable so that we can control how lights are processed by this shader. By doing this, we can greatly reduce the amount of lights the shader processes. Modify the #pragma statement in your shader with the following code:
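The directive might read as follows; noforwardadd limits the shader to a single per-pixel directional light, and SimpleLambert is the assumed name of the custom lighting function:

```c
#pragma surface surf SimpleLambert noforwardadd
```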
We can optimize this further by sharing UVs between the normal map and the diffuse texture. To do this, we simply change the UV lookup in our surf() function to use the UVs of the main texture instead of the UVs of the normal map.
As we have removed the need for the normal map UVs, we need to make sure that we remove the normal map UV code from the Input struct.
Finally, we can further optimize this shader by telling the shader that it only works with certain renderers:
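This is done in the #pragma statement; for example (assuming the custom lighting function name used earlier):

```c
#pragma surface surf SimpleLambert exclude_path:prepass noforwardadd
```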
The result of our optimization passes shows us that we really don't notice a difference in visual quality, but we have reduced the amount of time it takes for this shader to be drawn to the screen. You will learn about finding out how much time it takes for a shader to render in the next recipe, but the idea to focus on here is that we achieve the same result with less data. So keep this in mind when creating your shaders. The following image shows us the final result:
How it works…
Now that we have seen the ways in which we can optimize our shaders, let's dive in a bit
deeper and really understand why all of these techniques are working and look at a couple of
other techniques that you can try for yourself.
Let's first focus our attention on the size of the data each of our variables stores when we declare them. If you are familiar with programming, then you will understand that you can declare values or variables with different sizes of types. This means that a float actually has a maximum size in memory. The following descriptions explain these variable types in much more detail:
float: A float is a full 32-bit precision value and is the slowest of the three different types we see here. It also has its corresponding values of float2, float3, and float4.
half: The half variable type is a reduced 16-bit floating point value, is suitable to store UV values and color values, and is much faster than using a float value. It has corresponding values like the float type, which are half2, half3, and half4.
fixed: A fixed value is the smallest in size of the three types, but can be used for lighting calculations and colors, and has the corresponding values of fixed2, fixed3, and fixed4.
Our second phase of optimizing our simple shader was to declare the noforwardadd value in our #pragma statement. This is basically a switch that automatically tells Unity that any object with this particular shader receives only per-pixel light from a single directional light. Any other lights that are calculated by this shader will be forced to be processed as per-vertex lights using Spherical Harmonic values produced internally by Unity. This is especially obvious when we place another light in the scene to light our sphere object, because our shader is doing a per-pixel operation using the normal map.
This is great, but what if you wanted to have a bunch of directional lights in the scene and control over which of these lights is used for the main per-pixel light? Well, if you notice, each light has a Render Mode drop-down. If you click on this drop-down, you will see a couple of flags that can be set. These are Auto, Important, and Not Important. By selecting a light, you can tell Unity that a light should be considered more as a per-pixel light than a per-vertex light by setting its render mode to Important, and vice versa. If you leave a light set to Auto, you will let Unity decide the best course of action.
Place another light in your scene and remove the texture that is currently in the main texture slot for our shader. You will notice that the second point light does not react with the normal map; only the directional light that we created first does. The concept here is that you save on per-pixel operations by calculating all extra lights as vertex lights, and save performance by calculating only the main directional light as a per-pixel light. The following image visually demonstrates this concept, as the point light is not reacting with the normal map:
Finally, we did a bit of cleaning up and simply told the normal map texture to use the main texture's UV values, and we got rid of the line of code that pulled in a separate set of UV values specifically for the normal map. This is always a nice way to simplify your code and clean up any unwanted data.
We also declared the exclude_path:prepass value in the #pragma statement so that this shader wouldn't accept any custom lighting from the deferred renderer. This means that we can really use this shader effectively only in the forward renderer, which is set in the main camera's settings.
By taking a bit of time, you will be amazed at how much a shader can be optimized. You have seen how we can pack grayscale textures into a single RGBA texture as well as use lookup textures to fake lighting. There are many ways in which a shader can be optimized, which is why it is always an ambiguous question to ask in the first place, but knowing these different optimization techniques, you can cater your shaders to your game and target platform, ultimately resulting in very streamlined shaders and a nice steady framerate.
Profiling your shaders
Now that we know how we can reduce the overhead that our shaders might incur, let's take a look at how to find problematic shaders in a scene where you might have a lot of shaders, or a ton of objects, shaders, and scripts all running at the same time. Finding a single object or shader among a whole game can be quite daunting, but Unity provides us with its built-in Profiler. This allows us to actually see, on a frame-by-frame basis, what is happening in the game and each item being used by the GPU and CPU.
Using the Profiler, we can isolate items such as shaders, geometry, and general rendering items using its interface to create blocks of profiling jobs. We can filter out items till we are looking at the performance of just a single object. This then lets us see the effects on the CPU and GPU that the object has while it is performing its functions at runtime.
Let's take a look through the different sections of the Profiler and learn how to debug our scenes and, most importantly, our shaders.
How to do it…
To use the Profiler, we will take a look at some of the UI elements of this window. Before we hit play, let's take a look at how to get the information we need from the Profiler:
First, click on the larger blocks in the Profiler window, such as the CPU Usage, GPU Usage, and Rendering blocks. You will find these blocks on the left-hand side of the upper window:
Using these blocks, we can see different data specific to those major functions of our game. The CPU Usage block is showing us what most of our scripts are doing as well as physics and overall rendering. The GPU Usage block is giving us detailed information about the elements that are specific to our lighting, shadows, and render queues. Finally, the Rendering block is giving us information about the drawcalls and the amount of geometry we have in our scene at any one frame.
By clicking on each of these blocks, we can isolate the type of data we see during our profiling session.
Now, click on the tiny colored blocks in one of these Profiler blocks and hit play or Ctrl + P to run the scene.
This lets us dive down even deeper into our profiling session so that we can filter out what is being reported back for us. While the scene is running, uncheck all of the boxes except for Opaque in the GPU Usage block. Notice that we can now see just how much time is being used to render the objects that are set to the Render Queue of Opaque:
Another great function of the Profiler window is the action of clicking and dragging
in the graph view. This will automatically pause your game so that you can
analyze a certain spike in the graph to find out exactly which item is causing the
performance problem. Click and drag around in the graph view to pause the game
and see the effect of using this functionality:
Turning our attention now towards the lower half of the Profiler window, you will
notice that there is a drop-down item available when we have the GPU Usage block selected.
We can expand this to get even more detailed information about the current active
profiling session and, in this case, more information about what the camera is
currently rendering and how much time it is taking up:
This gives us a complete look at the inner workings of what Unity is processing in this
particular frame. In this case, we can see that our three spheres with our
shader are taking roughly 0.14 milliseconds to draw to the screen, they are taking
up seven draw calls, and this process is taking 3.1 percent of the GPU's time in every
frame. It's this type of information that we can use to diagnose and solve performance
issues with regard to shaders. Let's conduct a test to see the effects of adding one
more texture to our shader and blending two diffuse textures together using a lerp
function. You will see the effects in the Profiler pretty clearly.
First, update the Properties block of your shader with the following code to give us
another texture to use:
Then let's feed our texture to the CGPROGRAM by declaring a corresponding sampler variable.
Now it's time to update our surf function accordingly so that we blend our two
diffuse textures together:
Once you save your modifications in your shader and return to Unity's editor, we can run our
game and see the increase in milliseconds of our new shader. Press play once you have
returned to Unity and let's take a look at the results in our Profiler:
You can see now that the amount of time taken to render our Opaque shaders in this scene
has gone up from 0.140 milliseconds. By adding another texture and using a lerp
function, we increased the render time for our spheres. While it's a small change,
imagine having 20 shaders all working in different ways on different objects.
Using the information
given here, you can pinpoint areas that are causing performance
decreases more quickly and solve these issues using the techniques from the previous recipe.
How it works…
While it's completely out of the scope of this book to describe how this tool actually works
internally, we can surmise that Unity has given us a way to view the computer's performance
while our game is running. Basically, this window is tied very tightly to the CPU and GPU to give
us real-time feedback on how much time is being taken for each of our scripts, objects, and
render queues. Using this information, we have seen that we can track the efficiency of our
shader writing to eliminate problematic areas and code.
There's more…
It is also possible to profile specifically for mobile platforms. Unity provides us with a couple of
extra features when the Android or iOS build target is set in the Build Settings. We can actually
get real-time information from our mobile devices while the game is running. This becomes
very useful because you are able to profile directly on the device itself instead of
only in your editor. To find out more about this process, refer to
Unity's documentation at
the following link:
Modifying our shaders for mobile
Now that we have seen quite a broad set of techniques for making really optimized shaders, let's
take a look at writing a nice, high-quality shader targeted at a mobile device. It is actually
quite easy to make a few adjustments to the shaders we have written so that they run faster
on a mobile device. This includes elements such as using the approxview or halfasview
lighting function variables. We can also reduce the number of textures we need and even
apply better compression for the textures we are using. By the end of this recipe, we will have
a nicely optimized normal-mapped, Specular shader for use in our mobile games.
How to do it…
For this recipe, we will write a mobile-friendly shader from scratch and discuss the elements
that make it more mobile-friendly:
Let's first populate our Properties block with the needed textures. In this case, we are
going to use a single diffuse texture with the gloss map in its alpha channel, a
normal map, and a slider for specular intensity:
Our next task is to set up our #pragma declarations. These will simply turn certain
features of the Surface Shader on and off, ultimately making the shader cheaper or
more expensive:
We then need to make the connection between our Properties block and our
CGPROGRAM. This time, we are going to use the fixed variable type for our specular
intensity slider to reduce its memory usage:
In order for us to map our textures to the surface of our object, we need to get some
UVs. In this case, we are going to get only one set of UVs to keep the amount of data
in our shader down to a minimum:
The next step is to fill in our lighting function, using a few new input variables that
are available to us thanks to the new #pragma declarations:
Finally, we complete the shader by creating the surf function and processing the
final color of our surface:
When you have completed the code portion of this recipe, save your shader and return to the Unity
editor to let the shader compile. If no errors occurred, you should see a result similar to the
following image:
How it works…
So, let's begin the description of this shader by explaining what it does and doesn't do. First, it
excludes the deferred lighting pass. This means that if you created a lighting function that was
connected to the deferred renderer's prepass, it wouldn't use that particular lighting function
and would look for the default lighting function like the ones that we have been creating thus
far in this book.
This particular shader also does not support lightmapping by Unity's internal light-mapping
system. This just keeps the shader from trying to find light maps for the object that the shader
is attached to, making the shader more performance-friendly because it doesn't have to
perform the lightmapping check.
We included the noforwardadd declaration so that we process only per-pixel textures with
a single directional light. All other lights are forced to become per-vertex lights and will not be
included in any per-pixel operations you might do in the lighting function.
Finally, we are using the halfasview declaration to tell Unity that we aren't going to use the
main viewDir parameter found in a normal lighting function. Instead, we are going to use the
half vector as the view direction and process our specular with this. This becomes much faster
for the shader to process, as it will be done on a per-vertex basis. It isn't completely accurate
when it comes to simulating specular in the real world, but visually, on a mobile device, it looks
just fine. It's techniques like these that make a shader more efficient and cleaner, code-wise. Always
make sure that you are using only the data you need, while weighing this against your target
hardware and the visual quality that the game requires. In the end, it becomes a cocktail of
techniques that ultimately makes up the shaders for your games.
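To see why the half vector is a cheaper stand-in for a full view-dependent specular term, here is a small Python sketch of a Blinn-Phong style calculation; the vectors and gloss power below are illustrative values, not anything taken from the recipe's shader:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong_specular(normal, light_dir, view_dir, gloss_power):
    # The half vector sits halfway between the light and view directions;
    # dotting it with the normal approximates the reflected-ray specular
    # term, and it can be computed once per vertex instead of per pixel.
    half_vector = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    return max(dot(normal, half_vector), 0.0) ** gloss_power

# The light and view here are mirrored around the normal, so the half
# vector lines up with the normal and the specular term peaks at 1.0.
n = (0.0, 1.0, 0.0)
l = normalize((0.0, 1.0, 1.0))
v = normalize((0.0, 1.0, -1.0))
print(round(blinn_phong_specular(n, l, v, 16.0), 4))  # → 1.0
```

Raising the gloss power tightens the highlight, which is why the recipe exposes specular intensity as a slider.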
Screen Effects with
Unity Render Textures
In this chapter, you will learn the following recipes:
Setting up the screen effects script system
Using brightness, saturation, and contrast with screen effects
Using basic Photoshop-like Blend modes with screen effects
Using the Overlay Blend mode with screen effects
One of the most impressive aspects of learning to write shaders is the process of creating
your own screen effects, also known as post effects. With these screen effects, we can create
stunning real-time images with Bloom, Motion Blur, HDR effects, and so on. Most modern
games on the market today make heavy use of these Screen Effects for their depth of field
effects, bloom effects, and even color correction effects.
Throughout this chapter, you will learn how to build up the script system that gives us the
control to create these screen effects. We will cover Render Textures, what the depth buffer is,
and how to create effects that give you Photoshop-like control over the final rendered image
of your game. By utilizing screen effects for your games, you not only round out your shader
writing knowledge, but you will also have the power to create your own incredible real-time
renders with Unity.
With all of our assets ready, you should have a simple scene setup, which looks similar to the
following image:
How to do it…
In order to make our grayscale screen effect work, we need a script and a shader. So, we will
complete these two new items here and fill them in with the appropriate code to produce our
screen effect. Our first task is to complete the C# script. This will get the whole system
running. After this, we will complete the shader and see the results of our Screen Effect. Let's
complete our script and shader with the following steps:
Open the C# script, and let's begin by entering a few
variables that we will need to store important objects and data. Enter the following
code at the very top of the script:
In order for us to edit the Screen Effect in real time when the Unity editor isn't
playing, we need to enter the following line of code just above the declaration of the
class. As our Screen Effect is using a shader to perform the pixel operations on our screen
image, we have to create a material to run the shader. Without this, we can't access
the properties of the shader. For this, we will create a C# property to check for a
material, and create one if it doesn't find one. Enter the following code just after the
declaration of the variables from step 1:
We now want to set up some checks in our script to see if the current target platform
that we are building the Unity game on actually supports image effects. If it doesn't
find anything at the start of this script, the script will disable itself:
To actually grab the rendered image from the Unity renderer, we need to make use
of the following built-in function that Unity provides us, called OnRenderImage().
Enter the following code so that we can have access to the current Render Texture:
Our Screen Effect has a grayscale amount variable, with which we can
control how much grayscale we want for our final Screen Effect. So, in this case, we
need to make the value go from 0 to 1, where 0 is no grayscale effect and 1 is the full
grayscale effect. We will perform this operation in the Update() function, which runs
every frame while this script is active:
Finally, we complete our script by doing a little bit of cleanup on objects we created
when the script started:
At this point, if it compiled without errors, we can apply this script to the camera
in Unity. Let's apply the Screen Effect script to the main camera in our
scene. You should see the grayscale amount value and a field for a shader, but the
script throws an error to the console window. It says that it is missing an instance of
an object and so won't process appropriately. If you recall from step 4, we do
some checks to see whether we have a shader and whether the current platform supports the
shader. As we haven't given the Screen Effect script a shader to work with, the
material variable is just null, which throws the error. Let's continue our Screen
Effects system by completing the shader.
To begin our shader, we will populate our properties with some variables so that we
can send data to this shader:
Our shader is now going to utilize pure CG shader code instead of utilizing Unity's
built-in Surface Shader code. This will make our Screen Effect more optimized, as we
need to work only with the pixels of the Render Texture. So, we will create a new
CGPROGRAM block in our shader and fill it with some new #pragma statements that we haven't
seen before:
In order to access the data being sent to the shader from the Unity editor, we need to
create the corresponding variables in our CGPROGRAM:
Finally, all we need to do is set up our pixel function, in this case called frag(). This
is where the meat of the Screen Effect is. This function will process each pixel of the
Render Texture and return a new image to our Screen Effects script:
Once the shader is complete, return to Unity and let it compile to see if any errors occurred. If
not, assign the new shader to the Screen Effects script and change the value of the
grayscale amount variable. You should see the game view go from a colored version of the game
to a grayscale version of it. The following image demonstrates this Screen Effect:
With this complete, we now have an easy way to test out new Screen Effect shaders without
having to write our whole Screen Effect system over and over again. Let's dive in a little deeper
and learn about what's going on with the Render Texture and how it is processed throughout
its existence.
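The per-pixel math behind this grayscale effect is simply a blend between the original color and its luminance. Here is a Python sketch of that operation; the luminance weights below are a common industry choice and an assumption on our part, not necessarily the exact values used in the shader:

```python
def lerp(a, b, t):
    # Linear interpolation between two RGB tuples, like CG's lerp().
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def grayscale_effect(rgb, amount):
    # A widely used set of luminance weights (an assumption here);
    # amount = 0 leaves the pixel untouched, amount = 1 is full grayscale.
    luminance = 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]
    return lerp(rgb, (luminance, luminance, luminance), amount)

print(grayscale_effect((1.0, 0.0, 0.0), 1.0))  # → (0.299, 0.299, 0.299)
```

The grayscale amount slider in the script maps directly onto the `amount` parameter of this blend.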
How it works…
To get a screen effect up and running inside of Unity, we need to create a script and shader.
The script drives the real-time update in the editor and is also responsible for capturing the
Render Texture from the main camera and passing it to the shader. Once the render texture
gets to the shader, we can use the shader to perform per-pixel operations.
At the start of the script, we perform a few checks to make sure that the currently selected build
platform actually supports screen effects and the shader itself. There are instances where
a current platform will not support the Screen Effects or the shader that we are using. So, the
checks that we do in the Start() function make sure that we don't get any errors if the platform
doesn't support the screen system.
Once the script passes these checks, we initiate the Screen Effects system by calling the
built-in OnRenderImage() function. This function is responsible for grabbing the Render Texture,
giving it to the shader using the Graphics.Blit() function, and returning the processed
image to the Unity renderer. You can find more information on these two functions at the
following URLs:
Once the render texture reaches the shader, the shader takes it, processes it through
its pixel function, and returns the final color for each pixel.
You can see how powerful this becomes, as it gives us Photoshop-like control over the final
rendered image of our game. These screen effects work sequentially, like Photoshop layers, on
the camera. When you place these screen effects one after the other, they will be processed
in that order. These are just the bare-bones steps to get a screen effect working, but they are
the core of how the screen effects system works.
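This layer-like, sequential behavior can be sketched as a simple fold over the image, where each effect consumes the previous effect's output. The Python sketch below uses toy one-channel "images" purely to show the ordering; it is not how Unity implements the chain internally:

```python
from functools import reduce

def apply_effects(image, effects):
    # Each effect takes an image (here just a list of pixel values) and
    # returns a processed copy; stacking effects is a left-to-right fold,
    # just as stacked effect components on the camera run in order.
    return reduce(lambda img, effect: effect(img), effects, image)

darken = lambda img: [p * 0.5 for p in img]
invert = lambda img: [1.0 - p for p in img]

# Darken first, then invert the darkened result.
print(apply_effects([1.0, 0.5], [darken, invert]))  # → [0.5, 0.75]
```

Reordering the list changes the result, which is exactly why the order of screen effect components on the camera matters.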
There's more…
Now that we have our simple Screen Effect system up and running, let's take a look at some
of the other useful information we can obtain from Unity's renderer:
We can actually get the depth of everything in our current game by turning on Unity's built-in
Depth mode. Once this is turned on, we can use the depth information for a ton of different
effects. Let's take a look at how this is done:
Create a new shader, then double-click on it to open it in MonoDevelop.
We will create the Main Texture property and a property to control the power of the
scene depth effect. Enter the following code in your shader:
Now we need to create the corresponding variables in our CGPROGRAM. We are going
to add one more variable, called _CameraDepthTexture. This is a built-in variable
that Unity has provided us with through the use of the UnityCG include file; it
gives us the depth information from the camera:
We will complete our depth shader by utilizing a couple of built-in functions that
Unity provides us with: the UNITY_SAMPLE_DEPTH() and Linear01Depth()
functions. The first function actually gets the depth information from our
_CameraDepthTexture and produces a single float value for each pixel. The
Linear01Depth() function then makes sure that the values are within the 0-1
range. We then take this final depth value to a power we can control, so that we can
choose where the mid-value of the 0-1 range sits in the scene, relative to the camera position:
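Assuming the depth value has already been linearized into the 0-1 range (as a Linear01Depth-style helper produces), the power adjustment described above boils down to one line of math. A Python sketch, where the clamp and parameter names are our own and not the recipe's exact code:

```python
def depth_effect(linear01_depth, depth_power):
    # linear01_depth is assumed to already be normalized to the 0-1 range;
    # raising it to a power reshapes the curve, shifting where the
    # mid-gray value falls relative to the camera.
    clamped = min(max(linear01_depth, 0.0), 1.0)
    return clamped ** depth_power

# A power above 1 pushes mid-range depth values toward 0 (closer to black).
print(round(depth_effect(0.25, 2.0), 4))  # → 0.0625
```

A power below 1 has the opposite effect, brightening the mid-range, which is why exposing it as a slider gives such direct artistic control.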
With our shader complete, let's turn our attention to our Screen Effects script. We
need to add a depth power variable to the script so that we can let users change
the value in the editor:
Our OnRenderImage() function then needs to be updated so that it passes the
right value to our shader:
To complete our depth Screen Effect, we need to tell Unity to turn on the depth
rendering in the current camera. This is done by simply setting the main camera's
depthTextureMode:
With all the code set up, save your script and shader and return to Unity to let them both
compile. If no errors
are encountered, you should see a result similar to the following image:
Using brightness, saturation, and contrast
with screen effects
Now that
we have our screen effects system up and running, we
can explore how to create
more involved pixel operations to perform some of the more common Screen Effects found in
games today.
To begin with, using a screen effect to adjust the overall final colors of your game is crucial in giving
artists global control over the final look of the game. Common examples include color adjustment
sliders to tweak the intensity of the reds, blues, and greens of the final rendered game, or
putting a certain tone of color over the whole screen, as seen in something like
a sepia film effect.
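Both kinds of adjustment described here come down to simple per-channel arithmetic. The following Python sketch shows the two ideas; the sepia tint triplet is purely illustrative and not a value from the book:

```python
def adjust_channels(rgb, red_scale, green_scale, blue_scale):
    # Per-channel intensity sliders: scale each channel, then clamp to 0-1.
    scaled = (rgb[0] * red_scale, rgb[1] * green_scale, rgb[2] * blue_scale)
    return tuple(min(max(c, 0.0), 1.0) for c in scaled)

def sepia_tone(rgb, tint=(1.0, 0.89, 0.71), strength=1.0):
    # A full-screen tint: gray out the pixel, color it with the tint, and
    # blend toward that toned value. The tint here is an illustrative
    # sepia-ish color, not one taken from the recipe.
    gray = sum(rgb) / 3.0
    toned = tuple(gray * t for t in tint)
    return tuple(c + (t - c) * strength for c, t in zip(rgb, toned))

print(adjust_channels((0.5, 0.5, 0.5), 1.2, 1.0, 0.8))  # → (0.6, 0.5, 0.4)
```

Exposing the scale factors and the tint strength as sliders is what hands this control over to the artists.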
For this particular recipe, we are going to cover some of the more core color adjustment
operations we can perform on an image. These are brightness, saturation, and contrast. Learning
how to code these color adjustments gives us a nice base to learn the art of screen effects.
When completed, you should have a scene similar to the following image:
How to do it…
Now that we have completed our scene setup and created our new script and shader, we can
begin to fill in the code necessary to achieve the brightness, saturation, and contrast Screen
Effect. We will be focusing on just the pixel operation and variable setup for our script and
shader, as getting a Screen Effect system up and running is described in the Setting up the
screen effects script system recipe.
Let's begin by launching our new shader and script in MonoDevelop. Simply double-
click on the two files in the project view to perform this action.
Editing the shader first makes more sense so that we know what kind of variables
we will need for our C# script. Let's begin this by entering the appropriate properties
for our brightness, saturation, and contrast effect. Remember, we need to keep the
_MainTex property in our shader, as this is the property that the RenderTexture
targets when creating Screen Effects:
In order for us to access the data coming in from our properties, we
need to create the corresponding variables in the CGPROGRAM:
Now we need to create the operations that will perform the brightness, saturation,
and contrast effects. Enter the following new function in our shader, just above the
frag() function. Don't worry if it doesn't make sense just yet; all the code will be
explained shortly:
Finally, we just need to update our frag() function to use our new
function. This will process all of the pixels of the
Render Texture and pass them back to our script:
With the code entered in the shader, return to the Unity editor to let the new shader compile.
If there are no errors, we can return to MonoDevelop to work on our script. Let's begin this by
creating a couple of new lines of code that will send the proper data to our shader:
Our first step in modifying our script is to add the proper variables that will drive the
values of our Screen Effect. In this case, we will need a slider for brightness, a slider
for saturation, and a slider for contrast:
With our variables set up, we now need to tell the script to pass their data to the
shader. We do this in the OnRenderImage() function:
Finally, all we need to do is clamp the values of the variables within a range that is
reasonable. These clamp values are entirely preferential, so you can use whichever
values you see fit:
With the script completed and the shader finished, we simply assign our Screen Effect
script to our main camera and our shader to the script. You should then see the effects
of brightness, saturation, and contrast by manipulating the slider values. The following
image shows a result you can achieve with this screen effect:
The following image shows another example of what can be done by adjusting the colors of
the rendered image:
How it works…
Since we now know how the basic Screen Effects system works, let's just cover the per-pixel
operations we created in the color adjustment function.
The function starts by taking a few arguments. The first and most important is the current
render texture. The other arguments simply adjust the overall effect of the screen effect and
are represented by sliders in the Inspector. Once the function receives the
render texture and the adjustment values, it declares a few constant values that we use to
modify and compare against the original render texture.
The luminance coefficient variable stores the values that will give us the overall brightness of
the image; these coefficients are pretty standard throughout the industry. We can find the
overall brightness of the image by taking the dot product of the current image with these
luminance coefficients. Once we have the brightness, we simply use a couple of lerp functions
to blend from the grayscale version of the brightness operation to the original image multiplied
by the brightness value being passed into the function.
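Putting that together, the whole brightness/saturation/contrast operation is just a multiply and two lerps. This Python sketch mirrors that structure; the exact luminance coefficients below are one common choice and an assumption on our part:

```python
def lerp(a, b, t):
    # Linear interpolation between two RGB tuples, like CG's lerp().
    return tuple(x + (y - x) * t for x, y in zip(a, b))

# A common set of luminance coefficients (an assumption here).
LUM_COEFF = (0.2125, 0.7154, 0.0721)

def csb(rgb, brightness, saturation, contrast):
    brt = tuple(c * brightness for c in rgb)                 # brightness: plain multiply
    intensity = sum(c * w for c, w in zip(brt, LUM_COEFF))   # perceptual grayscale value
    sat = lerp((intensity,) * 3, brt, saturation)            # blend gray <-> full color
    return lerp((0.5, 0.5, 0.5), sat, contrast)              # blend mid-gray <-> result

# With all three sliders at 1.0, the pixel passes through unchanged.
print(csb((0.5, 0.25, 0.75), 1.0, 1.0, 1.0))  # → (0.5, 0.25, 0.75)
```

Note how each slider is just a lerp endpoint: saturation 0 collapses to grayscale, and contrast 0 collapses to flat mid-gray.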
Screen effects like this one are crucial for achieving high-quality graphics for your games,
as they let you tweak the final look of your game without having to edit each material in your game.
Using basic Photoshop-like Blend modes
with screen effects
Screen effects aren't just limited to adjusting the colors of a rendered image from
our game. We can also use them to combine other images with our Render Texture. This
technique is no different than creating a new layer in Photoshop and choosing a blend mode
to blend two images together or, in our case, a texture with a Render Texture. This becomes a
very powerful technique as it gives the artists in a production environment a way to simulate
their blending modes in the game rather than just in Photoshop.
For this particular recipe, we are going to take a look at some of the more common blend
modes: Multiply, Add, and Screen. You will see how simple it is to have the power
of Photoshop Blend modes in your game.
How to do it…
The first blend mode that we will implement is the Multiply blend mode, as seen in Photoshop.
Let's begin by modifying the code in our shader first.
Open the shader by double-clicking on it in Unity's project view.
We need to add some new properties so that we have a texture to blend with and a
slider for an opacity value. Enter the following code in your new shader:
Enter the corresponding variables in our CGPROGRAM so that we can access the data
from our Properties block:
Finally, we update our frag() function so that it performs the multiply operation on
our two textures:
Save the shader and return to the Unity editor to let the new shader code compile
and check for errors. If no errors occurred, then double-click on the C# script file to
launch it in the MonoDevelop editor.
We need to create the corresponding variables in our script file as well. So, we will
need a texture so that we can assign one to the shader and a slider to adjust the final
amount of the blend mode we want to use:
We then need to send our variable data to the shader through the OnRenderImage() function:
To complete the script, we simply fill in our Update() function so that we can clamp
the value of the opacity variable between 0 and 1:
With this complete, we assign the screen effect script to our main camera and our screen
effect shader to our script so that it has a shader to use for the per-pixel operations. Finally, in
order for the effect to be fully functional, the script and shader are looking for a texture. You can
assign any texture to the texture field in the Inspector for the screen effect script. Once this
texture is in place, you will see the effect of multiplying this texture over the rendered
image. The following image demonstrates the screen effect:
The following image demonstrates a higher opacity value, making the multiplied image
much more apparent over our rendered image:
With our first blend mode set up, we can begin to add a couple of simpler blend modes to
get a better understanding of how easy it is to add more effects and really fine-tune the final
result in your game. However, first let's break down what is happening here.
How it works…
Now we are starting to gain a ton of power and flexibility in our Screen Effects programming.
I am sure that you are now starting to understand how much one can do with this simple
system in Unity. We can literally replicate the effects of Photoshop layer blending modes in our
game to give artists the flexibility they need to achieve high-quality graphics in a short amount
of time. In this particular recipe, we looked at how to multiply two images together, add two images
together, and perform a screen blending mode, using just a little bit of mathematics. When
working with blend modes, one has to think on a per-pixel level. For instance, when we are
using a multiply blend mode, we literally take each pixel from the original render texture and
multiply it with the corresponding pixel of the blend texture. The same goes for the add blend mode.
It is just a simple mathematical operation of adding each pixel from the source texture, or
render texture, to the corresponding pixel of the blend texture.
The screen blend mode is definitely a bit more involved, but it is actually doing the same
thing. It takes each image, the render texture and the blend texture, inverts them, multiplies
them together, and then inverts the result again to achieve the final look. Just like Photoshop
blends its layers together using blend modes, we can do the same with screen effects.
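Each of these blend modes is one line of per-channel math, with the opacity slider implemented as a lerp back toward the untouched render texture pixel. A Python sketch of all three (the function names and the clamp in the add mode are our own choices):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def multiply_blend(base, blend, opacity):
    # base * blend darkens; opacity fades back toward the original base.
    return lerp(base, base * blend, opacity)

def add_blend(base, blend, opacity):
    # base + blend brightens; clamped so channels stay in the 0-1 range.
    return lerp(base, min(base + blend, 1.0), opacity)

def screen_blend(base, blend, opacity):
    # Invert both, multiply, invert the result: brightens like add,
    # but can never blow out past 1.0.
    return lerp(base, 1.0 - (1.0 - base) * (1.0 - blend), opacity)

print(round(multiply_blend(0.5, 0.5, 1.0), 4))  # → 0.25
print(round(screen_blend(0.5, 0.5, 1.0), 4))    # → 0.75
```

The symmetry of the two results (0.25 versus 0.75) shows why screen is often described as the opposite of multiply.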
There's more…
Let's continue this recipe by adding a couple more blend modes to our screen effect.
In the screen effect shader, let's add the following code to our frag() function, just above
the value we are returning to our script. We will also need to comment out the multiply blend
so that we don't return that as well:
Save the shader file in MonoDevelop and return to the Unity editor to let the shader
compile. If no errors occurred, you should see a result similar to the following image.
This is a simple add blending mode:
As you can see, this has the opposite effect of multiply because we are adding the
two images together.
Finally, let's add one more blend mode, called a Screen blend. This one is a little bit
more involved from a mathematical standpoint, but still simple to implement. Enter
the following code in the shader:
The following image demonstrates the results of using a Screen type blend mode to blend two
images together in a screen effect:
Using the Overlay Blend mode with screen effects
For our final recipe, we are going to take a look at another type of blend mode: the Overlay
Blend mode. This blending actually makes use of some conditional statements that determine
the final color of each pixel in each channel. So, the process of using this type of blend mode
needs a bit more coding to work. Let's take a look at how this is done.
How to do it…
To begin our Overlay Screen Effect, we will need to get the code of our shader up and running
without errors. We can then modify our script �le to feed the correct data to the shader.
We first need to set up our properties in our Properties block. We will use the
same properties from the previous few recipes in this chapter:
We then need to create the corresponding variables in our CGPROGRAM:
In order for the Overlay Blend effect to work, we will have to process each pixel from
each channel individually. To do this in a shader, we have to write a custom function
that will take in a single channel, for instance, the red channel, and perform the
Overlay operation. Enter the following code in the shader, just below the variable
declarations:
Finally, we need to update our frag() function to process each channel of our
textures to perform the blending:
With the
code completed in the shader, our effect should be working. Save the shader
and return to the Unity editor to let the shader compile. Our script is already set up,
so we don't have to modify it any further. Once the shader compiles, you should see a
result similar to the following image:
How it works…
Our Overlay blend mode is definitely a lot more involved, but if you really break the
function down, you will notice that it is simply a multiply blend mode and a screen blend
mode. It's just that, in this case, we are doing a conditional check to apply one or the other
blend mode to a pixel.
With this particular Screen Effect, when the Overlay function receives a pixel, it checks to see
whether it is less than 0.5. If it is, then we apply a modified multiply blend mode to that pixel;
if it's not, then we apply a modified screen blend mode to the pixel. We do this for each pixel,
for each channel, giving us the final RGB pixel values for our Screen Effect.
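That conditional, per-channel logic can be sketched directly. In this Python version, the factor of 2 in each branch keeps the result continuous at the 0.5 midpoint; the function names are ours, not the recipe's:

```python
def overlay_channel(base, blend):
    # Conditional per-channel blend: a doubled multiply for dark base
    # values, a doubled screen for bright ones. At base == 0.5 both
    # branches return the blend value, so the transition is seamless.
    if base < 0.5:
        return 2.0 * base * blend
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)

def overlay(base_rgb, blend_rgb):
    # Apply the single-channel function to each of R, G, and B.
    return tuple(overlay_channel(b, s) for b, s in zip(base_rgb, blend_rgb))

# Blending with mid-gray (0.5) leaves the base image unchanged.
print(overlay((0.25, 0.75, 0.5), (0.5, 0.5, 0.5)))  # → (0.25, 0.75, 0.5)
```

That neutrality at mid-gray is the defining property of overlay: dark areas get darker, bright areas get brighter, and the midtones pass through.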
As you can see, there are many things that can be done with screen effects. It really just
depends on the platform and amount of memory you have allocated for screen effects.
Usually, this is determined throughout the course of a game project, so have fun and get
creative with your screen effects.
Gameplay and Screen Effects
When it comes to creating believable and immersive games, material design is not the only
aspect that we need to take into account. The overall feeling can be altered using screen
effects. This is very common in movies, for instance, when colors are corrected in the
post-production phase. You can implement these techniques in your games too, using the
knowledge from Chapter 8, Screen Effects with Unity Render Textures. Two interesting effects
are presented in this chapter; you can, however, adapt them to fit your needs and create your
very own screen effects.
In this chapter, you will learn the following recipes:
Creating an old movie screen effect
Creating a night vision screen effect
If you are reading this book, you are most likely a person who has played a game or two in
your time. One of the aspects of real-time games is the effect of immersing a player into a
world to make it feel as if they were actually playing in the real world. The more modern games
make heavy use of screen effects to achieve this immersion.
With screen effects, we can turn the mood of a certain environment from calm to scary, just by
changing the look of the screen. Imagine walking into a room that is contained within a level,
then the game takes over and goes into a cinematic moment. Many modern games will turn
on different screen effects to change the mood of the current moment. Understanding how to
create effects triggered by gameplay is next in our journey of shader writing.
In this chapter, we are going to take a look at some of the more common gameplay screen
effects. You are going to learn how to change the look of the game from normal to an old
movie effect, and we are going to take a look at how many first-person shooter games apply
their night vision effects to the screen. With each of these recipes, we are going to look at how
to hook these up to game events so that they are turned on and off as the game's current
state changes.
Creating an old movie screen effect
Many games are set in different times. Some take place in fantasy worlds or future sci-fi
worlds, and some even take place in the old west, when film cameras were just being
developed and the movies that people watched were black and white, or sometimes tinted
with what is called a sepia effect. The look is very distinct, and we are going to replicate this
look using a screen effect in Unity.
There are a few steps involved in achieving this look; as it takes more than just making the
whole screen black and white or grayscale, we need to break this effect down into its component
parts. If we analyze some reference footage of an old movie, we can begin to do this. Let's take
a look at the following image and break down the elements that make up the old movie look:
We constructed this image using a few reference images found online. It is always a good
idea to try and utilize Photoshop to construct images like this to aid you in creating a plan for
your new screen effect. Performing this process not only tells us the elements we will have
to code in, but it also gives us a quick way to see which blending modes work and how we
will construct the layers of our screen effect. The Photoshop file we created for this recipe
is included on this book's support page.
Dust and scratches: The third and final layer in our old movie screen effect is dust
and scratches. This layer will utilize two different tiled textures, one for scratches
and one for dust. The reason is that we will want to animate these two textures over
time, at different tiling rates. This will give the effect that the film is moving along,
with small scratches and dust appearing on each frame. The following image
demonstrates this effect isolated in its own texture:
Let's get our screen effect system ready with the preceding textures. Perform the following steps:
Gather up a vignette texture and dust and scratches texture, like the ones we just saw.
Create a new script and a new shader for this effect.
With our new files created, fill in the code necessary to get the screen effect system up and running. For references on how to do this, see Chapter 8, Screen Effects with Unity Render Textures.
Finally, with our screen effect system up and running and our textures gathered, we can begin the process of recreating this old film effect.
How to do it…
Our individual layers for our old film screen effect are quite simple, but when combined, we get some very visually stunning effects. Let's run through how to construct the code for our script and shader, then we can step through each line of code and learn why things are working the way they are. At this point, you should have the screen effects system up and running, as we will not be covering how to set this up in this recipe.
We will begin by entering the code in our script. The first block of code that we will enter defines the variables that we want to expose to the Inspector, in order to let the user of this effect adjust them as they see fit. We can also use our mocked-up Photoshop file as a reference when deciding what we will need to expose for this effect. Enter the following code in your effect script:
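As a rough sketch, an old film effect script of this kind typically exposes fields like the following; these particular names and default values are illustrative, not necessarily the book's exact listing:

```csharp
// Illustrative sketch only; the recipe's exact names may differ.
public Shader curShader;                    // the old film shader
public Color sepiaColor = Color.white;      // tint for the colorize pass
public Texture2D vignetteTexture;           // darkened screen edges
public float vignetteAmount = 1.0f;
public Texture2D scratchesTexture;          // tiled scratches overlay
public float scratchesYSpeed = 10.0f;
public float scratchesXSpeed = 10.0f;
public Texture2D dustTexture;               // tiled dust overlay
public float dustYSpeed = 10.0f;
public float dustXSpeed = 10.0f;
```

Each of these fields maps onto one of the layers we identified in the Photoshop mock-up, which is why building the mock-up first makes this step much easier.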
With our script complete, let's turn our attention to our shader file. We need to create, in our shader, the corresponding variables that we created in our script. This will allow the script and shader to communicate with one another. Enter the following code in your shader:
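As a sketch, the shader-side declarations mirror the script's fields; whatever names you choose, they must match the names the script passes with SetTexture/SetFloat/SetColor (these names are illustrative):

```hlsl
// Illustrative sketch; names must match what the script sets.
uniform sampler2D _MainTex;        // the render texture from the camera
uniform sampler2D _VignetteTex;
uniform sampler2D _ScratchesTex;
uniform sampler2D _DustTex;
fixed4 _SepiaColor;
fixed _VignetteAmount;
fixed _ScratchesYSpeed;
fixed _ScratchesXSpeed;
fixed _DustYSpeed;
fixed _DustXSpeed;
fixed _RandomValue;
```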
Now, we simply fill in the guts of our fragment function so that we process the pixels for our screen effect. To start with, let's get the render texture and vignette texture passed to us by the script:
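A minimal sketch of that first step, assuming the illustrative names above, could look like this:

```hlsl
// Sketch: sample the scene render and the vignette overlay,
// then darken the screen edges by the vignette amount.
fixed4 frag(v2f_img i) : COLOR
{
    fixed4 renderTex = tex2D(_MainTex, i.uv);
    fixed4 vignetteTex = tex2D(_VignetteTex, i.uv);
    renderTex.rgb *= lerp(fixed3(1, 1, 1), vignetteTex.rgb, _VignetteAmount);
    return renderTex;
}
```

The later layers of the effect are built up inside this same function before the final return.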
How it works…
Now, let's walk through each of the layers in this screen effect, break down why each of the lines of code is working the way it is, and get more insight as to how we can add more to this screen effect.
Now that our old film screen effect is working, let's step through the lines of code in our fragment function, as all the other code should be pretty self-explanatory at this point in the book. Just like our Photoshop layers, our shader is processing each layer and then compositing them together, so while we go through each layer, try to imagine how the layers in Photoshop work. Keeping this concept in mind always helps when developing new screen effects.
Here, we have the first set of lines of code in our fragment function. The second set of lines follows the same pattern:
These lines of code are almost exactly like the previous lines of code, in which we need to generate unique animated UV values to modify the position of our screen effect layers. We simply use the built-in _SinTime value to get a value between -1 and 1, multiply it by our random value, and then by another multiplier to adjust the overall speed of the animation. Once these UV values are generated, we can then sample our dust and scratches texture using these new animated values.
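A sketch of what this UV animation can look like (the multiplier names follow the illustrative declarations earlier; exact values are up to you):

```hlsl
// _SinTime.z oscillates between -1 and 1; _RandomValue comes from
// the script so the scratches and dust never repeat predictably.
fixed2 scratchesUV = fixed2(i.uv.x + (_RandomValue * _SinTime.z * _ScratchesXSpeed),
                            i.uv.y + (_Time.x * _ScratchesYSpeed));
fixed2 dustUV = fixed2(i.uv.x + (_RandomValue * (_Time.x * _DustXSpeed)),
                       i.uv.y + (_RandomValue * (_Time.x * _DustYSpeed)));
// Sample the overlays with the animated UVs.
fixed4 scratchesTex = tex2D(_ScratchesTex, scratchesUV);
fixed4 dustTex = tex2D(_DustTex, dustUV);
```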
Our next set of code handles the creation of the colorizing effect for our old film screen effect. The following code snippet demonstrates these lines:
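One possible form of such a colorize step converts the pixel to a luminance value with the standard luminance weights and then tints it with the sepia color passed in from the script (a sketch, not the book's exact listing):

```hlsl
// Grayscale the render, then tint it with the user-chosen color.
fixed lum = dot(renderTex.rgb, fixed3(0.299, 0.587, 0.114));
fixed3 finalColor = lum * _SepiaColor.rgb;
```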
Creating a night vision screen effect
Our next screen effect is definitely a more popular one. The night vision screen effect is found in games such as Call of Duty: Modern Warfare and just about any first-person shooter on the market today. It is the effect of brightening the whole image using that very distinct lime green color.
In order to achieve our night vision effect, we need to break down our effect using Photoshop. It is a simple process of finding some reference images online and composing a layered image to see what kind of blending modes we will need and in which order we will need to combine our layers. The following image shows the result of performing just this process in Photoshop:
Let's begin to break down our rough Photoshop composite image into its component parts so that we can better understand the assets we will have to gather. In the next recipe, we will cover the process of doing this.
Scan lines: To increase the effect of this being a new type of display for the player, we will tile scan lines over the top of our tinted layer. For this, we will use a texture created in Photoshop and let the user tile it so that the scan lines can be bigger or smaller.
Noise: Our next layer is a simple noise texture that we tile over the tinted image and scan lines to break up the image and add even more detail to our effect. This layer simply emphasizes that digital read-out look:
Let's create a screen effect system by gathering our textures. Perform the following steps:
Gather up a vignette texture, noise texture, and scan line texture, like the ones we just saw.
Create a new script and a new shader for this effect.
With our new files created, fill in the code necessary to get the screen effect system up and running. For instructions on how to do this, refer to Chapter 8, Screen Effects with Unity Render Textures.
Finally, with our screen effect system up and running and our textures gathered, we can begin the process of recreating this night vision effect.
How to do it…
With all of our assets gathered and the screen effect system running smoothly, let's begin to add the code necessary to both the script and shader. We will begin our coding with the script, so double-click on this file now to open it in MonoDevelop.
We need to create a few variables that will allow the user of this effect to adjust it in the Inspector. Enter the following code in the script:
public float brightness = 1.0f;
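Besides brightness, a night vision effect of this kind typically exposes values like the following — an illustrative sketch, not necessarily the book's exact listing:

```csharp
// Illustrative sketch only; names and defaults may differ.
public float contrast = 2.0f;
public Color nightVisionColor = Color.white;   // the lime green tint
public Texture2D vignetteTexture;
public Texture2D scanLineTexture;
public float scanLineTileAmount = 4.0f;
public Texture2D nightVisionNoise;
public float noiseXSpeed = 100.0f;
public float noiseYSpeed = 100.0f;
public float distortion = 0.2f;                // lens distortion amount
public float scale = 0.8f;                     // lens distortion scale
```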
To make sure that we are passing the data from our Properties block to our CGPROGRAM block, we need to make sure to declare them with the same names in both.
We can now concentrate on the meat of our night vision shader. Let's start by entering the code that is necessary to get the render texture and vignette texture. Enter the following code in the fragment function:
When you have finished entering the code, return to the Unity editor to let the script and shader compile. If there are no errors, hit play in the editor to see the results. You should see something similar to the following image:
How it works…
The night vision effect is actually very similar to the old film screen effect, which shows us just how modular we can make these components. Just by swapping the textures that we are using for overlays and changing the speed at which our tiling rates are being calculated, we can achieve very different results using the same code.
The only difference with this effect is the fact that we are including a lens distortion in our screen effect. So let's break this down so that we can get a better understanding of how it works.
The following code snippet illustrates the code used in processing our lens distortion. It is a snippet of code provided to us by the makers of SynthEyes, and the code is freely available to use in your own effects:
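A commonly circulated form of this barrel distortion snippet, adapted to shader code, looks like the following; here _Distortion and _Scale are assumed to be the user-facing parameters fed in from the script:

```hlsl
// Barrel distortion, based on the freely available SynthEyes
// lens distortion algorithm.
float2 barrelDistortion(float2 coord)
{
    // Offset UVs so that (0, 0) is the screen centre.
    float2 h = coord.xy - float2(0.5, 0.5);
    float r2 = h.x * h.x + h.y * h.y;
    // Push UVs outward more strongly the further they are from centre.
    float f = 1.0 + r2 * (_Distortion * sqrt(r2));
    return f * _Scale * h + 0.5;
}
```

The distorted UVs returned by this function are then used to sample the render texture, bowing the image outward like a lens on a head-mounted display.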
In this chapter, you will learn the following recipes:
Using CgInclude files that are built into Unity
Making your shader world modular with CgIncludes
Implementing a Fur Shader
Implementing heatmaps with arrays
This final chapter covers some advanced shader techniques that you can use for your game. You should remember that many of the most eye-catching effects you can see in games are made by testing the limits of what shaders can do. This book provides you with the technical basis to modify and create shaders, but you are strongly encouraged to play and experiment with them as much as you can. Making a good game is not a quest for photorealism; you should not approach shaders with the intention of replicating reality because this is unlikely to happen. Instead, you should try to use shaders as a tool to make your game truly unique. With the knowledge of this final chapter, you will be able to create the materials that you want.
Using CgInclude files that are built into Unity
Our first step in writing our own CgInclude files is to understand what Unity is already providing us with for shaders. By writing Surface Shaders, there is a lot happening under the hood, which makes the process of writing Surface Shaders so efficient. We can see this code in the included CgInclude files found in your Unity install folder.
All the files contained within this folder do their part to render our objects with our shaders to the screen. Some of these files take care of shadows and lighting, some take care of helper functions, and some manage platform dependencies. Without them, our shader writing experience would be much more laborious.
You can find a list of the information that Unity has provided us with in its documentation.
Let's begin the process of understanding these built-in CgInclude files, using some of the built-in helper functions from the UnityCG.cginc file.
You should have a simple scene set up to work on the shader. Refer to the following
screenshot as an example:
How to do it…
With the scene prepared, we can now begin the process of experimenting with some of these built-in helper functions. Double-click on the shader that was created for this scene in order to open it in MonoDevelop and insert the code given in the following steps:
Add the following code to the Properties block of the new shader file. We will need a single texture and a slider for our example shader:
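As a sketch, the Properties block could read as follows; the slider name is illustrative, and the Range gives us the 0–1 slider in the Inspector:

```shaderlab
Properties
{
    _MainTex ("Base (RGB)", 2D) = "white" {}
    _DesatValue ("Desaturate", Range(0, 1)) = 0.5
}
```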
We then need to make sure that we create the data connection between our Properties and CGPROGRAM blocks, with the following code placed after the CGPROGRAM declaration:
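Assuming the property names sketched above, the matching declarations inside the CGPROGRAM block would be:

```hlsl
sampler2D _MainTex;
fixed _DesatValue;
```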
Finally, we just have to update our surf function to include the following code. We introduce a new function that we haven't seen yet, which is built into Unity's UnityCG.cginc file:
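A sketch of the updated surf function, using Unity's built-in Luminance() helper to blend toward grayscale (the _DesatValue name follows the earlier sketch):

```hlsl
void surf (Input IN, inout SurfaceOutputStandard o)
{
    half4 c = tex2D (_MainTex, IN.uv_MainTex);
    // Luminance() lives in UnityCG.cginc; lerp blends between the
    // full-colour texture and its grayscale value.
    o.Albedo = lerp(c.rgb, Luminance(c.rgb), _DesatValue);
    o.Alpha = c.a;
}
```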
With the shader code modified, you should see something similar to the following screenshot. We have simply used a helper function, built into Unity's CgInclude file, to give us the effect of desaturating the main texture of our shader:
How it works…
By using the built-in Luminance() helper function, we are able to quickly get a desaturation or grayscale effect on our shaders. This is all possible because the UnityCG.cginc file is brought automatically into our shader, as we are using a Surface Shader.
If you search through the UnityCG.cginc file, opened in MonoDevelop, you will find the implementation of this function at line 276. The following snippet is taken from the file:
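In Unity versions of that era, the implementation reads approximately as follows; the exact luminance coefficients have changed across Unity releases, so treat this as representative rather than verbatim:

```hlsl
// From UnityCG.cginc (approximate; varies by Unity version).
inline fixed Luminance (fixed3 c)
{
    return dot(c, fixed3(0.22, 0.707, 0.071));
}
```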
How to do it…
With our CgInclude file open, we can begin to enter the code that will get it working with our Surface Shaders. The following code will get our CgInclude file ready for use within our Surface Shaders and allow us to continually add more code to it as we develop more shaders:
We begin our CgInclude file with what is called a preprocessor directive. These are statements such as #pragma and #include. In this case, we want to define a new set of code that will be executed if our shader includes this file in its compiler directives. Enter the following code at the top of your CgInclude file:
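The opening lines follow this pattern; the definition name itself is up to you, and the one shown here is illustrative:

```hlsl
// If this name is not yet defined, define it and compile what follows.
#ifndef MY_CG_INCLUDE
#define MY_CG_INCLUDE
```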
We always need to make sure that we close the definition check, just like an if statement needs to be closed with two brackets in C#. Enter the following code just after the #define directive:
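The closing line is simply:

```hlsl
#endif
```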
At this point, we just need to fill in the guts of the CgInclude file. So we finish off our CgInclude file by entering the following code, a Half Lambert lighting function together with the custom color it uses:
fixed4 _MyColor;
inline fixed4 LightingHalfLambert (SurfaceOutput s, fixed3 lightDir, fixed atten)
{
    fixed diff = max(0, dot(s.Normal, lightDir));
    diff = (diff + 0.5) * 0.5;
    fixed4 c;
    c.rgb = s.Albedo * _LightColor0.rgb * ((diff * _MyColor.rgb) * atten);
    c.a = s.Alpha;
    return c;
}
When we return to Unity, the shader and CgInclude file will compile, and if you do not see any errors, you will notice that we are in fact using our new Half Lambert lighting model and a new color swatch appears in our material's Inspector. The following screenshot shows the result of using our CgInclude file:
How it works…
When using shaders, we can include other sets of code using the #include directive. This tells Unity that we want to let the current shader use the code from within the included file in the shader; this is the reason why these files are called CgInclude files. We are including snippets of Cg code using the #include directive.
Once we declare the #include directive and Unity is able to find the file in the project, Unity will then look for the code snippets that have been defined. This is where we start to use the #ifndef and #define directives. When we declare the #ifndef directive, we are simply saying, if not defined, define something with the given name. So if Unity doesn't find a definition with that name, it goes and creates it when the CgInclude file is compiled, thereby giving us access to the code that follows. The #endif directive simply says that this is the end of this definition, so stop looking for more code.
You can now see how powerful this becomes, as we can now store all of our lighting models and custom variables in one file and greatly reduce the amount of code that we have to write. The real power comes when you begin to give your shaders flexibility by defining multiple states of functions in the CgInclude files.
The appearance of a material depends on its physical structure. Shaders attempt to simulate it, but in doing so, they oversimplify the way light behaves. Materials with a complex macroscopic structure are particularly hard to render. This is the case for many textile fabrics and animal furs. This recipe will show you how it is possible to simulate fur and other materials (such as grass) that are more than just a flat surface. In order to do this, the same material is drawn multiple times over and over, increasing its size every time. This creates the illusion of fur.
The shader presented here is based on the work of Jonathan Czeck and Aras Pranckevičius.
The white pixels in the control mask will be extruded from the original material, simulating fur. It is important that the distribution of these white pixels is sparse in order to give the illusion that the material is made out of many small hair strands. A loose way to create such a texture is as follows:
Apply a threshold to your original texture to better capture the patches where the fur is thickest.
Apply a noise filter that pixelates the image. The RGB channels of the noise must not be independent in order to produce a black and white result.
For a more realistic look, overlay a Perlin noise filter that adds to the variability of the fur.
Finally, apply a threshold filter again to better separate the pixels in your texture.
Like all the other shaders before, you will need to create a new standard shader and material
to host it.
How to do it…
For this recipe, we can start by modifying a Standard shader. Add the following properties to it:
This shader requires you to repeat the same pass several times. We will use the technique introduced in the Making your shader world modular with CgIncludes section to group all the code necessary for a single pass in an external file. Let's start by creating a new CgInclude file with the following code:
uniform fixed _GravityStrength;
void vert (inout appdata_full v) {
    fixed3 direction = lerp(v.normal, _Gravity * _GravityStrength
        + v.normal * (1-_GravityStrength), FUR_MULTIPLIER);
    v.vertex.xyz += direction * _FurLength * FUR_MULTIPLIER * v.color.a;
}
struct Input {
    float2 uv_MainTex;
    float3 viewDir;
};
void surf (Input IN, inout SurfaceOutput o) {
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
}
Once the shader is compiled and attached to a material, you can change its appearance from the Inspector. The _FurLength property determines the space between the fur shells, which alters the length of the fur. A longer fur might require more passes to look realistic. The cutoff properties are used to control the density of the fur and how it gets progressively thinner. The edge fade determines the final transparency of the fur tips, resulting in a fuzzier look. Softer materials should have a high edge fade. Finally, _Gravity and _GravityStrength curve the fur shells to simulate the effect of gravity.
How it works…
The technique presented in this recipe is known as Lengyel's concentric fur shell technique or, simply, the shell technique. It works by creating progressively bigger copies of the geometry that needs to be rendered. With the right transparency, it gives the illusion of a continuous thread of fur.
The shell technique is extremely versatile and relatively easy to implement. Truly realistic fur requires not only extruding the geometry of the model, but also altering its vertices. This is possible with tessellation shaders, which are much more advanced and not covered in this book.
Each pass in this Fur Shader is contained in its CgInclude file. The vertex function creates a slightly bigger version of the model, which is based on the principle of normal extrusion. Additionally, the effect of gravity is taken into account so that it gets more intense the further we are from the centre.
In this example, the alpha channel is used to determine the final length of the fur. This allows for more precise control.
Finally, the surface function reads the control mask from the alpha channel. It uses the cutoff value to determine which pixels to show and which ones to hide. This value changes from the first to the final fur shell to match the desired thinning of the fur.
The final alpha value of the fur also depends on its angle from the camera, giving it a softer look.
There's more…
The Fur Shader has been used to simulate fur. However, it can be used for a variety of other materials. It works very well for materials that are naturally made of multiple layers, such as forest canopies, fuzzy clouds, human hair, and even grass.
There are many other improvements that can dramatically increase its realism. You can add a very simple wind animation by changing the direction of the gravity depending on the current time. If calibrated correctly, this can give the impression that the fur is moving in the wind. Additionally, you can make your fur move when the character is moving. All these little tweaks contribute to the believability of your fur, giving the illusion that it is not just a static material drawn on the surface. Unfortunately, this shader comes at a price: 20 passes are very heavy to compute. The number of passes roughly determines how believable the material is. You should play with fur length and passes in order to get the effect that works best for you. Given the performance impact of this shader, it is advisable to have several materials with different numbers of passes; you can use them at different distances and save a lot of computation.
Implementing heatmaps with arrays
One characteristic that makes shaders hard to master is the lack of proper documentation. Most developers learn shaders by messing around with the code, without having a deep knowledge of what's going on. The problem is amplified by the fact that Cg/HLSL makes a lot of assumptions, some of which are not properly advertised. Unity3D allows C# scripts to communicate with shaders using methods such as SetFloat, SetInt, SetVector, and so on. Unfortunately, Unity3D doesn't have a SetArray method, which has led many developers to believe that Cg/HLSL doesn't support arrays either. This is not true. This recipe will show you how it's possible to pass arrays to shaders. Just remember that GPUs are highly optimized for parallel computations, and using for loops in a shader will dramatically drop its performance.
For this recipe, we will implement a heatmap, as shown in the following image:
How to do it…
This shader is quite different from the ones created before, yet it is relatively short. For this reason, the entire code is provided in the following points:
Copy this code to the newly created shader:
half h = 0;
// Loop over all the heat points; the bound matches the 20-element
// arrays declared earlier.
for (int i = 0; i < 20; i++)
{
    // Distance from this pixel's UV to the i-th point (sketch).
    half di = distance(IN.uv_MainTex, _Points[i].xy);
    half ri = _Properties[i].x;
    half hi = 1 - saturate(di / ri);
    h += hi * _Properties[i].y;
}
// Converts (0-1) according to the heat texture
h = saturate(h);
If your heatmap is going to be used as an overlay, then make sure that the ramp texture has an alpha channel and the texture is imported with the Alpha is Transparency option enabled.
Create a new script with the following code:
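As a sketch, a script of this kind initializes the shader arrays element by element, since Unity at the time offered no SetArray method; the class and field names here are illustrative:

```csharp
using UnityEngine;

// Illustrative sketch: pushes each point and its (radius, intensity)
// pair into the shader's arrays, one element at a time.
public class Heatmap : MonoBehaviour
{
    public Vector4[] positions;    // xy: point position in UV space
    public Vector4[] properties;   // x: radius, y: intensity
    public Material material;

    void Start()
    {
        for (int i = 0; i < positions.Length; i++)
        {
            material.SetVector("_Points" + i.ToString(), positions[i]);
            material.SetVector("_Properties" + i.ToString(), properties[i]);
        }
    }
}
```

Attach this to the quad that carries the heatmap material and fill the arrays in the Inspector.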

How it works…
This shader relies on things that have never been introduced before in this book; the first one is arrays. Cg allows arrays that can be created with the following syntax:
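A declaration of this kind, using the recipe's _Points and _Properties arrays with a fixed size known at compile time, reads:

```hlsl
uniform float4 _Points[20];      // xy: position of each heat point
uniform float4 _Properties[20];  // x: radius, y: intensity
```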
Cg doesn't support arrays with an unknown size: you must preallocate all the space that you
need beforehand. The preceding line of code creates an array of 20 elements.
Unity does not expose any method to initialize these arrays directly. However, single elements are accessible using the name of the array (_Points) followed by the position, such as _Points0. This currently works only for certain types of arrays, such as arrays of floats and vectors. The script attached to the quad initializes the shader's arrays, element by element.
In the fragment function of the shader, there is a similar for loop that, for each pixel of the material, queries all the points to find their contribution to the heatmap:
half ri = _Properties[i].x;
half hi = 1 - saturate(di / ri);
h += hi * _Properties[i].y;
The h variable stores the heat from all the points, given their radii and intensities. It is then used to look up which color to use from the ramp texture.
Shaders and arrays are a winning combination, especially as very few games use them to their full potential. However, they introduce a significant bottleneck, as for each pixel the shader has to loop through all the points.
Once we have the luminance values, we can simply add the color we want to tint the image with. This color is passed from our script to our shader, then to our fragment function, where we can add it to our grayscale render texture. Once completed, we will have a perfectly tinted image.
Finally, we create the blending between each of our layers in our screen effect.
