Creative Coder - Mirada Studios
Nov 2013 - Oct 2014

I worked as a programmer on several transmedia projects with Mirada Studios in Marina Del Rey, CA. For these projects, I designed and visualized data for large-scale installations. Between projects, I worked as a pipeline engineer, helping develop tools in Python to facilitate the production pipeline for other projects, such as commercials and film.
IBM
This project was a large-scale installation at IBM headquarters in New York, spanning the entire first floor of the building. My specific role pertained to the lobby animation: a 1920 x 4320, dynamically generated visualization that ran in the building's lobby.
Working in conjunction with another programmer, I collaborated with the creative director to design, program, and iterate on the visualization, a creative-code homage to Watson built in Cinder, and to update assets and documentation as the client's design needs changed. Together, we developed an animation that captured the essence of Watson as it receives a question, answers it, and learns from that answer.
My other role on this project was to prepare the installation machine to be shipped to IBM, installed onsite, and run for extended periods of time. To that end, I formatted the Linux server destined for IBM's lobby, installed the application, and configured the machine to launch it automatically. I also wrote a batch script for a Windows machine that launched the installation and relaunched it if it ever exited.
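The relaunch logic amounts to a small watchdog loop. This is a hedged Python stand-in for the actual batch script (the application path is a hypothetical placeholder):

```python
import subprocess
import time

# Hypothetical path to the installed application binary.
APP_CMD = ["/opt/installation/lobby_app"]

def keep_alive(cmd, max_restarts=None, delay=5.0):
    """Relaunch `cmd` whenever it exits, up to `max_restarts` times
    (None = run forever). Returns the number of launches performed."""
    launches = 0
    while max_restarts is None or launches < max_restarts:
        proc = subprocess.Popen(cmd)
        proc.wait()          # block until the app exits or crashes
        launches += 1
        time.sleep(delay)    # brief pause before restarting

    return launches

# Usage: keep_alive(APP_CMD) runs the installation indefinitely.
```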
Tools: Cinder, StayUp, SFML
OS: OSX, Windows, Linux
Visa
This was a multiscreen installation for Visa's headquarters consisting of high-resolution video, a portion of which visualized real-time Divvy bike-share transactions as well as Visa credit card transactions over varying periods of time in Myanmar, Chicago, and Mexico City.
I created data visualization prototypes from six weeks of Divvy bike rental transactions in Chicago, seven months of credit card transaction data in Mexico City, and thirteen months of credit card transaction data in Myanmar. Each transaction data point consisted of at minimum a date and a location; sometimes further time information was available as well. Based on my prototypes, our team's animators created visually appealing animations that stayed true to the data's story.
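As an illustration of the kind of prototyping involved, here is a minimal Python sketch that bins transactions by day for a timeline view. The record fields are invented stand-ins for the real data, which always contained at least a date and a location:

```python
from collections import Counter
from datetime import datetime

# Hypothetical records; the real data had at minimum a date and a location.
transactions = [
    {"date": "2014-03-01T08:15", "lat": 41.88, "lon": -87.63},
    {"date": "2014-03-01T17:40", "lat": 41.89, "lon": -87.62},
    {"date": "2014-03-02T09:05", "lat": 41.87, "lon": -87.65},
]

def daily_counts(rows):
    """Bin transactions by calendar day, sorted chronologically."""
    days = Counter()
    for row in rows:
        day = datetime.fromisoformat(row["date"]).date()
        days[day] += 1
    return dict(sorted(days.items()))
```

A day-to-count mapping like this is enough to drive a first-pass timeline animation before any visual design is layered on.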
Tools: Cinder, Nominatim/OpenStreetMap, Google Refine, D3.js
OS: OSX
Unnamed Financial Company
This project was a multiscreen, floor-to-ceiling installation depicting daily and historical financial activity, conducted in conjunction with an intermediary design firm and the clients themselves.
TL;DR:
My main responsibilities were downloading, interpreting, and visualizing real-time and historical data, as well as generating unique, dynamic, and meaningful visualizations and animations of this data to be updated in real-time. I designed and programmed data visualization prototypes, which I then delivered to the client and fellow creatives to discuss the results and iterate.
Project Summary:
The team at Mirada initially consisted of several designers, the lead developer, and myself. At this stage, the lead developer and I discussed which technology would best suit the design direction and how best to manage the project assets, as he was working remotely. We also established logistics for code reviews, code formatting, documentation, and daily updates. In addition, all team members, myself included, participated in design iterations for each of several data "stories" to be presented on the screens.
After the initial designs were settled, the team was narrowed to two designers and two programmers. The lead and I began work on code experiments, sketches, and functionality prototypes for the data-story concepts, written in a variety of languages and frameworks, from Cinder to Processing to D3.js. We rigorously tested the client's proprietary software and API, which would be used to pull the data in real time, and began coding rough design prototypes using real data.
In the third stage, I was the only remaining team member. I coded all data, animation, and design prototypes; formatted and interpreted new data to fit the current designs; communicated with the producers to build a schedule; and interfaced directly with both Mirada's creative director and the clients to iterate on the prototypes, troubleshoot technical issues with the client's proprietary software and API, and experiment with new design directions as new types of data were discovered.
Tools: Cinder, Processing, Bullet, Client Proprietary Software
OS: OSX, Windows
Internship with Side Effects Software
Jan 2013 - Sep 2013

I completed an internship at Side Effects Software in Santa Monica as a Houdini Gaming Intern. I worked on R&D for 3D gaming applications using Houdini, as well as the creation of a first-person shooter demo level for Side Effects Software's Houdini Engine, which allows Houdini Digital Assets to be imported into other software, such as Unity or Maya, while remaining fully interactive. In addition, I prototyped early versions of some Houdini Engine for Unity features, including user-interface prototypes and a "script attach" feature, which attached Unity components and scripts to imported digital assets based on the assets' attributes.
I constructed several simple Houdini digital assets with explanations in order to introduce Houdini to new users of all backgrounds. You can read the details of this work on my blog.
PYRO SPRITE
I created real-time game effects using both Houdini's pyro solver and in-engine particle system and rendering capabilities. You can see these in action in the Houdini Engine game demo for Unity 3D.
Pyro Sprite Demo from Bhavna Mahadevan on Vimeo.
I also wrote an OTL in Python to integrate Perforce with Houdini, bringing version control into the Houdini interface itself, and I managed the Perforce setup for the demo level project. Footage of my work is available below.
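Integrations like this typically shell out to the `p4` command line from Python. A minimal sketch of that pattern (not the actual OTL code) might look like:

```python
import subprocess

def p4_cmd(*args):
    """Build the argv for a Perforce command (assumes the `p4` CLI is on PATH)."""
    return ["p4", *args]

def run_p4(*args):
    """Run a Perforce command and return its stdout, raising on failure."""
    result = subprocess.run(p4_cmd(*args), capture_output=True,
                            text=True, check=True)
    return result.stdout

# e.g. run_p4("edit", "assets/tool.otl") opens a file for edit so Houdini
# can save over it without hitting a read-only flag.
```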
Tools: Houdini, Houdini Engine, Unity3D, Python, Perforce
OS: Windows
GAME DEMO
Features scripted over the course of three months:
Internship with DARPA Engage: Beanstalk Summer 2012
I just finished a summer internship with DARPA, where I worked on an educational game originally created during Spring 2012 at the ETC as part of team Sci Fri's semester project. For the summer, I was lead programmer and extended the existing game, Beanstalk, by incorporating a socio-emotional framework of "persistence, ask for help, cooperate, and discuss." To this end, I added a multiple-actor framework, where the actors are in-game NPCs (non-player characters) introduced in the narrative as agents the child player can get encouragement from, ask for help, cooperate with on solutions, and reflect on or discuss the problem with.
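The four behaviors map naturally onto distinct NPC roles. A toy Python sketch of that idea (the real game was in Unity/C#, and the names and lines here are invented):

```python
# Each NPC embodies one of the framework's four socio-emotional behaviors.
RESPONSES = {
    "persist": "Keep trying -- you almost had it!",
    "ask_for_help": "Stuck? Ask me for a hint any time.",
    "cooperate": "Let's solve this one together.",
    "discuss": "Why do you think that happened?",
}

class Actor:
    """An in-game NPC tied to one socio-emotional behavior."""

    def __init__(self, name, role):
        if role not in RESPONSES:
            raise ValueError(f"unknown role: {role}")
        self.name = name
        self.role = role

    def prompt(self):
        """The line this actor offers the child player."""
        return f"{self.name}: {RESPONSES[self.role]}"
```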
This was my first time programming NPC behavior (and my first time working on educational games). As I was working with a pre-existing code base, I learned many best practices for Unity coding, in addition to game design concepts specific to educational games. I also gained experience with a client who was highly involved in the design of the game (weekly meetings, regular email correspondence, immediate feedback, etc.).
You can see the current version of Beanstalk here.
Tools: Unity3D
OS: Windows
ETC Project: Eight Songs for a Mad King
Spring 2012
In Spring 2012, I had the privilege of spending my first ETC project semester (and second semester overall of my graduate program) in Manchester, UK, working on a location-based entertainment project in conjunction with the University of Salford. Our project was an interactive visualization of the monodrama "Eight Songs for a Mad King." This 30-minute piece was composed by Sir Peter Maxwell Davies, the current Master of the Queen's Music, and for our project was performed by Psappha, a local ensemble. As lead programmer, I programmed the three interactions described below, which used the Microsoft Kinect to immerse the guest in the overall experience. Each interaction was displayed on one of the five screens in a space known as "the Egg," and our project used all five screens. I also contributed to the interaction design and networked the five screens to start, play, change tracks, and end in sync, as our ultimate goal was to put on a performance.
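At its core, the sync problem is one controller telling every screen machine to begin at the same moment. A hedged Python sketch of that idea over UDP (the real project used Unity/C# and VLC, and the port number is assumed):

```python
import socket

SYNC_PORT = 9999  # assumed port; every screen machine listens here

def send_start(host, port=SYNC_PORT, msg=b"START"):
    """Controller side: fire the start message at one screen machine."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(msg, (host, port))

def wait_for_start(port=SYNC_PORT, timeout=5.0):
    """Screen side: block until the start message arrives, then play."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("", port))
        s.settimeout(timeout)
        msg, _ = s.recvfrom(64)
        return msg
```

The controller loops `send_start` over the five machines; because each screen blocks in `wait_for_start` until the message lands, playback begins nearly simultaneously.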
I came into this project with no Unity3D or C# experience, so I had to learn both very quickly and very well. Figuring out how to sync and network the screens across various software and machines was also challenging, but fun! Given more time, I would have found a quicker way to sync the screens; my final design required about five minutes of setup before each showing.
Tools: Unity3D, Microsoft Kinect SDK, C#, VLC Media Player
OS: Windows
Screen resolutions, video of the space, and our project in action can be seen below:
Short videos of each interaction are below. They are as follows:
1) Track 1 Interaction – The Sentry: Feathers fall from the ceiling, and guests can wave their hands to interact with the feathers as they fall.
2) Track 2 Interaction – The Country Walk: Guests can touch three of the crows on the tree and see the crows react to the contact.
3) Track 5 Interaction – The Phantom Queen: Guests wave their arms to light up the stars in the night sky.
Sin Glasses: DVFX - Georgia Tech
Fall 2010
This was a free-choice experiment for my DVFX class during my undergrad, in which I applied a sine-based distortion to video. The amplitude of the sine function varies from 0 to 50, and it varies by frame rather than by pixel; as the amplitude oscillates, the waviness of the image increases or decreases.
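The effect can be expressed as a per-frame amplitude envelope plus a per-row phase. This Python sketch mirrors the idea (the original was written in Processing, and the period and wavelength constants here are illustrative):

```python
import math

MAX_AMP = 50.0  # peak horizontal displacement in pixels

def amplitude(frame, period=120):
    """Amplitude envelope: oscillates smoothly between 0 and 50,
    varying per frame rather than per pixel."""
    return MAX_AMP * 0.5 * (1 - math.cos(2 * math.pi * frame / period))

def row_offset(y, frame, wavelength=40.0):
    """Horizontal shift applied to pixel row y at the given frame;
    shifting each row by this amount produces the wavy image."""
    return amplitude(frame) * math.sin(2 * math.pi * y / wavelength)
```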
Tools: Processing, Flip Camera
OS: OSX
Code Snippet: here
Horizontal Separable Filter: DVFX - Georgia Tech
Fall 2010
This was an assignment for my DVFX class during my undergrad that detects the horizontal edges of an image using the separable Sobel operator: the 3 x 1 matrix is used for the row pass, and the 1 x 3 matrix for the column pass. The operator calculates the vertical gradient (i.e., from top to bottom) of the image intensity at each source pixel, which shows whether the image changes abruptly at that pixel and thus whether that part of the image is likely to be an edge.
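In pure Python, the two separable passes can be sketched like this (a sketch of the technique, not the original Processing code): a [1, 2, 1] smoothing kernel across each row, then a [-1, 0, 1] derivative down each column, which together approximate the vertical Sobel gradient.

```python
def horizontal_edges(img):
    """img is a 2-D list of grayscale intensities; returns the vertical
    gradient, which is strong where horizontal edges occur. Border
    pixels are left at 0."""
    h, w = len(img), len(img[0])
    smoothed = [[0] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):                       # row pass: smooth horizontally
        for x in range(1, w - 1):
            smoothed[y][x] = img[y][x - 1] + 2 * img[y][x] + img[y][x + 1]
    for y in range(1, h - 1):                # column pass: vertical derivative
        for x in range(w):
            out[y][x] = smoothed[y + 1][x] - smoothed[y - 1][x]
    return out
```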
Tools: Processing, Flip Camera
OS: OSX
Code Snippet: here
Simulating Swimming Creatures: SURE Research
Summer 2010

Advised by: Dr. Greg Turk & Dr. Chris Wojtan in the Computer Graphics Group, Georgia Tech
The applet created for the project can be found here, and the project files are here.
The goal of this NSF-funded summer research project was to realistically simulate the motion of swimming creatures in an underwater environment. To achieve this, we used angular springs in addition to linear springs to produce a large range of realistic, lifelike motion from a simple model of a swimming creature (for example, a swinging tail or contracting tentacles). The model was represented by several mass points connected by linear springs and advanced via numerical integration. The linear springs were driven by angular springs, and this, combined with an applied drag force, simulated the independent motion of a simple swimming creature in a fluid. I developed the underwater environment in Processing, which applies forces to the masses and facilitates the creatures' progress, as well as the user interface for swimmer creation. Mass-spring simulations of swimming creatures such as this one can be adapted for use in the film and video game industries; plausible simulation also has biological applications in the study of animal behavior, and mechanical applications in the construction of robots able to move in an underwater environment.
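The core loop is standard mass-spring integration. Here is a hedged 1-D sketch of one timestep for two unit masses joined by a linear spring with drag, using semi-implicit Euler; it omits the angular springs, and the constants are illustrative rather than the project's values:

```python
# Spring stiffness, rest length, drag coefficient, and timestep (illustrative).
K, REST, DRAG, DT = 20.0, 1.0, 0.5, 0.01

def step(p1, v1, p2, v2):
    """Advance two 1-D unit masses joined by a spring through one timestep."""
    stretch = (p2 - p1) - REST
    f = K * stretch                 # Hooke's law: force on mass 1, toward mass 2
    a1 = f - DRAG * v1              # unit mass, so acceleration = net force
    a2 = -f - DRAG * v2             # equal and opposite spring force on mass 2
    v1 += a1 * DT                   # semi-implicit Euler: update velocity first,
    v2 += a2 * DT
    return p1 + v1 * DT, v1, p2 + v2 * DT, v2   # ...then position
```

With drag acting like the surrounding water, a stretched pair oscillates and settles back toward the rest length.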
This project was my introduction to Computer Graphics research, and I found it to be quite enjoyable! I liked being able to apply physics and numerical methods while still creating something visually appealing and fun to play with. The project fulfilled both my technical side and my artistic side! Given more time for this research, I would have liked to experiment with other forces on the swimming creature, such as the motion of water or waves, or deep-sea pressure.
To read the writeup of this project, click here.
Tools: Processing
OS: OSX
Natsukashii: Production & Sound Design - ETC
Fall 2011
Produced and designed sound for a Kinect game built in two weeks for my Building Virtual Worlds class at the ETC-Global in Fall 2011. The game was played by a naive guest (as opposed to a student who already knew how to work the various platforms). I had a lot of fun producing for this game, as I had to design for a person with no prior Kinect experience while still maintaining a visually interesting environment and an intriguing story.
Tools: Adobe Audition, ProTools
OS: Windows
Secret World: Sound Design - ETC
Fall 2011
Sound design project for my Building Virtual Worlds class in Fall 2011 at the Entertainment Technology Center (Carnegie Mellon University). I added all sound effects and music ("Never Enough" by Dream Theater) to the trailer for The Secret World, an upcoming MMORPG created by Funcom. I loved starting from a completely silent track and slowly building the mood of the trailer as I added sounds. It was neat to see how other people in my class took the same trailer and created a completely different story just by using different sounds!
Tools: Adobe Audition, Adobe Premiere
OS: Windows
Institute for Personal Robots in Education
2009-2010
Advised By: Dr. Jay Summet, Institute for Personal Robots in Education, Georgia Tech
The purpose of this research was to determine the effect (positive, negative, or none) of using Scribbler robots to teach introductory programming courses; that is, whether using the Scribbler robot facilitated or hindered learning. I first transcribed beginning-of-course and end-of-course survey data for introductory programming classes at three schools (Georgia Institute of Technology, University of Tennessee Knoxville, and Georgia State University) that either did or did not use the Scribbler robot. After transcription, I used chi-square analysis to determine significant differences between Robot and Non-robot sections in the answers to the survey questions. Finally, under the supervision of Dr. Jay Summet, I accessed student records for Georgia Tech students to determine whether students took further Computer Science courses after a Robot or Non-robot introductory class.
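The chi-square computation itself is small. For a 2x2 contingency table (Robot vs. Non-robot sections crossed with a yes/no survey answer), the statistic can be sketched as follows; the counts in the test are invented, not the study's data:

```python
def chi_square(table):
    """Chi-square statistic for a 2x2 table of observed counts
    [[a, b], [c, d]]: sum of (observed - expected)^2 / expected,
    with expected counts from the row and column marginals."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat
```

The resulting statistic is compared against the chi-square distribution with one degree of freedom to decide significance.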
Acknowledged in:
S. Markham and K. N. King, Using personal robots in CS1: experiences, outcomes, and attitudinal influences, Proceedings of the 15th Annual Conference on Innovation and Technology in Computer Science Education (June 2010).