Interactive Function

I asked my lecturers for help with the interactive function of my project. I had originally found a Max/MSP patch, Video Triggers Video by Zach Poff. The patch detects motion in a live video feed and uses it to trigger and output a video. It also lets users draw specific areas on the live feed to detect movement in, rather than monitoring the entire frame. My lecturers pointed out that because it was made with a previous version of Max it no longer functions correctly, and that it would be easier to build the patch from scratch.
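To make that idea concrete: the core of such a patch is frame differencing restricted to a drawn region. Below is a rough Python/OpenCV sketch of that logic, not Zach Poff's patch itself; the region rectangle and both threshold values are placeholder assumptions. In Max the equivalent building blocks would be Jitter objects such as jit.grab and jit.op @op absdiff.

```python
import cv2

# Hypothetical region of interest, standing in for an area the user
# has drawn on the live feed: (x, y, width, height) in pixels.
ROI = (100, 80, 200, 150)
THRESHOLD = 25     # per-pixel difference treated as "movement" (assumed)
MIN_CHANGED = 500  # changed pixels needed before we call it a trigger (assumed)

cap = cv2.VideoCapture(0)  # live camera feed
ok, prev = cap.read()
if not ok:
    raise RuntimeError("could not read from camera")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Compare the current frame with the previous one, but only inside
    # the drawn region rather than across the entire image.
    x, y, w, h = ROI
    diff = cv2.absdiff(gray[y:y+h, x:x+w], prev[y:y+h, x:x+w])
    changed = cv2.countNonZero(
        cv2.threshold(diff, THRESHOLD, 255, cv2.THRESH_BINARY)[1]
    )
    if changed > MIN_CHANGED:
        print("motion in region -> trigger video")

    prev = gray
```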

They advised that the easiest way to create the patch would be to start from an existing motion-detection example and use it to output a video to MadMapper via Syphon. The animations I want to project would then be combined into a single video, with each animation placed at a different point on the timeline. Finally, the patch would track movement at specific coordinates and move the video's playhead to the relevant point depending on which model has been interacted with. So far I have used the motion-tracking example and output the triggered video to MadMapper via Syphon.
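The playhead step can be sketched the same way. Assuming a hypothetical combined file animations.mp4 and made-up cue times for each model, jumping the playback position based on which region was triggered might look like the following; in the real setup the frames would go out to MadMapper over Syphon (e.g. via the Syphon externals for Jitter) rather than to a local window.

```python
import cv2

# Assumed mapping from model/region index to a point in the combined
# animation video, in milliseconds; the real cue times would depend on
# how the animations are laid out in the final file.
CUE_POINTS_MS = {0: 0, 1: 8_000, 2: 16_000}

player = cv2.VideoCapture("animations.mp4")  # hypothetical combined video

def jump_to_cue(region_index: int) -> None:
    """Move the playhead to the animation for the triggered model."""
    player.set(cv2.CAP_PROP_POS_MSEC, CUE_POINTS_MS[region_index])

# e.g. motion was detected over model 1:
jump_to_cue(1)
while True:
    ok, frame = player.read()
    if not ok:
        break
    # Stand-in for sending the frame to MadMapper via Syphon.
    cv2.imshow("output", frame)
    if cv2.waitKey(33) & 0xFF == 27:  # Esc to quit
        break
```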

Design

I have also continued testing the design aspect of my project. I isolated the buildings from the aerial-view image and mapped them onto the model buildings I made, then placed the rest of the image underneath. The intention was to have precisely mapped images; however, the buildings in the underlying image are still visible.

I still wanted to see whether the contrasting styles of realistic and infographic projections worked together. I created simple vector images in Illustrator and animated them in After Effects. After projecting the images onto the model, I am not entirely happy with the outcome: I think the styles clash, and the real map image overlaps the models in some areas. To develop the design I will research further ways of achieving the look I want. I also intend to draw one of the models entirely in vector graphics to see what the overall effect is.
