Run A Massive Crowd Simulation

[Image: simple box crowd simulation]
"A year spent in artificial intelligence is enough to make one believe in God."
-- Alan Perlis

Last month, I used Massive for the first time. Due to time constraints, I haven't been able to explore it much more since then, but I've been watching and reading quite a few tutorials. That let me get a few small things out of the way quickly, and I was able to set this up in a reasonable amount of time.

Last time, I was able to model and place a simple car agent. This time, I made the agent even simpler (just a box) and focused on another element of an agent's design: its brain. A brain in Massive is essentially a set of logical rules that governs how the agent interacts with its environment and with the other agents in the scene.

In today's case, I decided that by default, the agent would travel forward along its Z axis at a set speed. If another agent is directly in front of it, it slows down. If an agent is in front and to the right, it turns left to avoid it, and if an agent is in front and to the left, it turns right.

The setup for the brain rules uses fuzzy logic - reasoning that is approximate rather than fixed and absolute. Massive uses nodes that correspond to different stages of the evaluation of these fuzzy logic rulesets. There are eight different brain nodes:
  • input - These can be a fixed number or can relate to a particular element of the agent. For my example today, I used my agent's vision attributes, which made my setup pretty intuitive to understand, but it isn't necessarily the best way to do this; I'll talk about that a little later.
  • timer - These nodes are used to set things to happen at particular intervals. I didn't use them today.
  • noise - If you want an oscillating pseudorandom but continuous value, that's what this node gives you. It's great for testing isolated agent behavior.
  • fuzz - A fuzz node is used to set the fuzzy logic parameters for your chosen input. Typically, this gives you something in the range of 0 to 1 or -1 to 1.
  • and/or - These nodes give you the rules for your logic. In our example, we're combining whether another agent is close and whether it's in front of, to the left of, or to the right of our agent.
  • defuzz - This node defines what we want to do with our output. If we want to add a multiplier to it or something, we can do that here. A defuzz node can also just be connected to an output and used as the default value when none of the other rules apply.
  • output - The output node is what the final value of the evaluation ends up being. This tells the agent where to apply this value. For example, a tz output node tells the agent to translate this many units along the Z axis.
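The pipeline those nodes form (input → fuzz → and/or → defuzz → output) can be sketched in a few lines of Python. The function names and the Gaussian curve shape are my own, not Massive's API; min and max are the standard fuzzy-logic AND/OR operators.

```python
import math

def fuzz(value, center, width):
    """Bell-shaped membership: 1.0 at `center`, falling off toward 0."""
    return math.exp(-((value - center) / width) ** 2)

def fuzzy_and(*degrees):
    """Fuzzy AND: take the minimum membership degree."""
    return min(degrees)

def fuzzy_or(*degrees):
    """Fuzzy OR: take the maximum membership degree."""
    return max(degrees)

def defuzz(degree, output_value):
    """Scale an output value by how strongly the rule fired."""
    return degree * output_value

# Example rule: "if another agent is near and dead ahead, brake."
near     = fuzz(0.2, center=0.0, width=0.3)   # vision.z near 0 means close
centered = fuzz(0.1, center=0.0, width=0.4)   # vision.x near 0 means ahead
brake    = defuzz(fuzzy_and(near, centered), output_value=-1.0)
```

The key idea is that nothing is a hard switch: `near` and `centered` are degrees between 0 and 1, so the braking output fades in smoothly as the blocker gets closer and more centered.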

Like I said, I used the agent's vision attributes as the input values of the brain. This tells Massive to make a map of what every agent can "see". Because my agents were just boxes, this wasn't too bad, but for a large scene with many, more complex agents, it can be computationally expensive. One way around this is to use vision in only a limited capacity, or to use one of the other agent senses like sound or agent fields instead.

Here is an example of what a simple vision map looks like:
[Image: a simple vision map]

An agent's vision.x refers to the direction another agent is in relative to it; its values vary from -1 to 1, corresponding to a set field of view (the default is 180 degrees, from -90 to 90). On the other hand, vision.z refers to the distance in front of the agent, from 0 to 1.
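To make that mapping concrete, here is a hypothetical reconstruction of how vision.x and vision.z could be derived from a neighbor's position in the agent's local frame. Massive computes these from its rendered vision map; `FOV_DEG` matches the 180-degree default described above, but `MAX_RANGE` and the function itself are my own assumptions.

```python
import math

FOV_DEG   = 180.0   # default field of view: -90 to +90 degrees
MAX_RANGE = 50.0    # assumed maximum vision distance in scene units

def vision_values(local_x, local_z):
    """Map a neighbor at (local_x, local_z), with +z straight ahead,
    to vision.x in [-1, 1] (bearing) and vision.z in [0, 1] (distance)."""
    angle = math.degrees(math.atan2(local_x, local_z))  # 0 = dead ahead
    vx = max(-1.0, min(1.0, angle / (FOV_DEG / 2.0)))
    dist = math.hypot(local_x, local_z)
    vz = max(0.0, min(1.0, dist / MAX_RANGE))
    return vx, vz
```

So a neighbor straight ahead gives vision.x of 0, one at the right edge of the view gives 1, and vision.z grows with distance until it clamps at 1.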

For vision.z, I set up a fuzz node to detect when another agent is near. If it's close and directly in front, the agent slows down. If it isn't, it moves at a specified normal speed in the Z direction.
[Image: the vision.z brain setup]
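A minimal sketch of that slow-down rule, assuming vision.z near 0 means the other agent is close; the speeds and threshold are made-up values, and the helper names are mine, not Massive node names.

```python
NORMAL_SPEED = 1.0   # default forward translation (tz) per frame
SLOW_SPEED   = 0.2

def near_degree(vision_z, threshold=0.3):
    """Fuzzy 'near': 1.0 when touching, fading to 0 at the threshold."""
    return max(0.0, 1.0 - vision_z / threshold)

def forward_speed(vision_z):
    """Blend between slow and normal speed by how near the blocker is."""
    n = near_degree(vision_z)
    return n * SLOW_SPEED + (1.0 - n) * NORMAL_SPEED
```

Because `near_degree` is continuous, the agent eases off rather than snapping between two speeds, which is exactly what the fuzzy setup buys you over a plain if/else.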

I used vision.x to determine whether another agent was to the left, right, or center of the agent, and to rotate if it was near and off to one side. This is what the agent brain looked like when I ran the sim.
[Image: the agent brain during the simulation]
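The steering rule can be sketched the same way. The sign conventions here are assumptions: I'm treating positive vision.x as "on the right" and a positive rotation as a right turn.

```python
def side_degrees(vision_x):
    """Triangular left/center/right memberships for a bearing in [-1, 1]
    (a simplification of the editable fuzz curves in Massive)."""
    left   = max(0.0, -vision_x)
    right  = max(0.0,  vision_x)
    center = max(0.0, 1.0 - 2.0 * abs(vision_x))
    return left, center, right

def turn_rate(vision_x, near, max_turn=5.0):
    """Turn away from a nearby agent: a blocker on the right gives a
    negative (left) turn, a blocker on the left gives a positive (right)
    turn. `near` is a fuzzy 0..1 degree; the result is degrees per frame."""
    left, _, right = side_degrees(vision_x)
    return near * (left - right) * max_turn
```

Multiplying by `near` plays the role of the AND node: the turn only fires when the blocker is both off to one side and close.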

When setting up the fuzz nodes, I can indicate how the values are weighted over the range of the input. Much of the time they look like smooth bell curves. The three curves below correspond to the left, center, and right fuzz values of the vision.x input. The cool thing about this example is that it also shows the vision map underneath.
[Image: left, center, and right fuzz curves over the vision map]
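Those three overlapping bell curves can be modeled as Gaussians over vision.x. The centers and widths below are guesses at reasonable values, since in Massive you shape each curve by hand in the fuzz node.

```python
import math

def bell(x, center, width):
    """A smooth bell curve peaking at `center`."""
    return math.exp(-((x - center) / width) ** 2)

curves = {
    "left":   lambda x: bell(x, -0.7, 0.4),   # peaks toward vision.x = -1
    "center": lambda x: bell(x,  0.0, 0.3),   # peaks dead ahead
    "right":  lambda x: bell(x,  0.7, 0.4),   # peaks toward vision.x = +1
}

# Any one bearing activates every curve to a different degree, which is
# what lets the left/center/right rules blend instead of hard-switching.
degrees = {name: f(0.3) for name, f in curves.items()}
```

At vision.x = 0.3, for example, the "right" and "center" curves both fire partially while "left" is nearly zero, so the resulting turn is gentle rather than abrupt.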

Here's what the quick simulation looked like when I was done.


I really enjoyed doing this and like I mentioned in my post about Massive, it's something I've been interested in for a while. I am definitely going to pursue this further. I'm thinking about setting up a bunch of tutorials online as I'm learning. Whether I get to actually do this kind of work while I'm at ILM or not, I'm having fun exploring this element of computer graphics.