Redefining how you control your devices, the Intel® RealSense™ 3D Camera enables new ways to interact in gaming, entertainment, and content creation. Featuring full 1080p color and a best-in-class depth sensor, the camera gives PCs and tablets 3D vision for new, immersive experiences. Interact more intuitively with facial analysis, hand and finger tracking, speech recognition, background subtraction and augmented reality, for agile device control.
Thu, September 4 2014 » FC2_2014 » No Comments » Author: softscan
Source: http://realvision.ae/blog/2014/08/the-language-of-visual-storytelling-in-360-virtual-reality/
They say you shouldn’t hijack the head-tracking data stream of the Oculus Rift; visuals should not be separated from the human vestibular system… but rules were meant to be broken. Why? Because there’s so much more to VR than gaming.
This is not to say that games aren’t becoming movies! I found myself more immersed in Naughty Dog’s “The Last of Us” than in any tent-pole movie I’ve seen in the past few months. Such is the power of CG movies, uncanny valley be damned.
Defining a language for 360 look-around movies:
You know how it all began oh so long ago (OK, 4 years ago) when the language of film-making was being defined / re-written for S3D. Well, time to re-write again. Immersive 360 film-making is set to explode, with an audience of teens to mid forties – at least at the start, and telling stories in this medium is quite a different skill-set to master.
Citizen Kane, back in the day, although a 2D film, gave modern 3D film-makers enough clues on how to use the medium of S3D effectively… but no one really had the patience to listen. Lighting, depth of field and yes – even hijacking the head-tracking stream can work when creating movies on a 360 canvas.
When I started investigating this exciting medium a few months ago, alarm bells would go off when I asked on Oculus Rift / game-engine forums about intercepting the head-tracking and orientation info of these devices – but that’s because, so far, only games have been designed for VR. It’s becoming evident that apart from the gimmicky interactive look-around voyeuristic possibilities offered by the medium, serious directors and storytellers will look at retaining control of the “frame” if they are to be enticed into creating movies in Virtual Reality.
So what could an immersive 360 Director’s tool-box look like?
Lighting – With the temptation to look around a scene, a Director and VR DoP can use the age-old technique of spot-lighting areas of importance.
360 Positional Sound – Wait until Dolby Atmos gets interested – chances are an Atmos SDK might already be in the works to create sound-scapes that can aid in directing an audience’s attention.
Depth of Field – The pet peeve of Stereoscopic 3D film-making, unless done correctly. This technique is worth exploring in an immersive 360 environment to guide audience attention. At least it won’t be the lead-by-the-nose experience it sometimes becomes when abused by inexperienced DPs and Directors on 2D films.
Limiting the Horizontal FoV – There is no rule per se that every scene should feature full wrap-around 360 views of the scene for the audience to explore. The horizontal field of view can be restricted for certain shots. This is a creative call, and is what will contribute to the flavor of the overall movie experience being crafted by the film-maker.
Advanced Tools for Immersive 360 Storytelling:
The short demo scene above is from MAYA – a mixed-media Motion Comic I’m working on for the Oculus Rift and other VR devices, including phone-based VR such as Google’s Cardboard, the Durovis Dive and others that allow almost any smartphone, Android or iOS, to play back VR movies and experiences.
The demo clip is a straight video grab of a wearer viewing a “page” of the motion comic via the Oculus Rift. It features both scene cuts and subtle interactivity.
Interactivity: In the first scene after the title, the girl stands at the window – that’s what the Director intends the audience to see, and the rest of the room has subdued lighting. That is… unless the wearer turns their head around, which triggers the bed lamps to increase in intensity.
Forced Cut: Hijacking the Head-tracker – The next scene shows the girl framed on the bed. This cut will happen irrespective of where the wearer of the Rift is looking. Yes, it is a forced cut, and it will put the scene bang center.
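Both behaviours can be sketched in a few lines of logic. This is a minimal illustration, not the Oculus SDK: the tracker query, the yaw angles and the thresholds are all hypothetical stand-ins for whatever the playback engine exposes.

```python
# Sketch of gaze-triggered lighting and a forced cut.
# Yaw angles are in degrees; the window is assumed to sit at yaw 0.
# All names and thresholds are illustrative, not real SDK calls.

def lamp_intensity(head_yaw_deg, window_yaw_deg=0.0, threshold=60.0):
    """Raise the bed-lamp intensity as the viewer looks away from the window."""
    off_axis = abs(head_yaw_deg - window_yaw_deg)
    if off_axis <= threshold:
        return 0.2  # subdued while attention stays on the girl at the window
    # ramp from 0.2 up to 1.0 as the head turns further away
    t = min((off_axis - threshold) / (180.0 - threshold), 1.0)
    return 0.2 + 0.8 * t

def forced_cut_yaw(current_yaw_deg):
    """A forced cut ignores where the viewer is looking: the view is reset
    so the new scene lands bang center, whatever current_yaw_deg is."""
    return 0.0
```

In a real player, `forced_cut_yaw` would be applied as an offset to the head-tracking stream at the moment of the cut – which is exactly the “hijacking” the forums warn against.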
The important point to be aware of is this – the same rules as for S3D storytelling apply; mismatched depth splicing should be avoided.
(image credit: RoadtoVR)
GreenScreening the Crew out:
This idea came to me when I glanced at the image of what I later realized was a paratrooper in the movie – I initially thought the crew/equipment had been covered in green for later keying/wire removal. While I have not looked at the actual feasibility of stereoscopically replacing a background plate after removing any green-screen-clad crew or equipment, I am confident it could be possible, even when dealing with de-warping and stitching the 360 image.
Compositing in 360:
Below is an interactive 360 “cubic” panorama, converted from an equirectangular image to the cube faces that form the panorama. (Click and drag; or, if browsing this page on an Android/iOS device, the gyro will work.)
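The equirectangular-to-cube conversion can be sketched as an inverse mapping: for each pixel of a cube face, find the direction it looks in, then the (longitude, latitude) of that direction, then the matching pixel in the equirectangular image. A simplified sketch for the front face only (function and parameter names are my own):

```python
import math

def front_face_to_equirect(u, v, face_size, eq_width, eq_height):
    """Map pixel (u, v) on the front cube face to equirectangular coordinates.
    u, v are in [0, face_size); the face spans a 90-degree field of view."""
    # normalised coordinates in [-1, 1], sampling at pixel centres
    x = 2.0 * (u + 0.5) / face_size - 1.0
    y = 2.0 * (v + 0.5) / face_size - 1.0
    # direction vector for the front face is (x, -y, 1)
    lon = math.atan2(x, 1.0)                   # longitude, -pi/4 .. pi/4
    lat = math.atan2(-y, math.hypot(x, 1.0))   # latitude
    # longitude/latitude to equirectangular pixel coordinates
    ex = (lon / math.pi + 1.0) * 0.5 * eq_width
    ey = (0.5 - lat / math.pi) * eq_height
    return ex, ey
```

The other five faces work the same way with the direction vector rotated; a real converter would also bilinearly interpolate the source pixel rather than point-sample.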
Thu, August 7 2014 » FC2_2014 » No Comments » Author: softscan
Gravity is a pen and pad that lets you sketch in 3D space using augmented reality. The cool patent-pending hardware and software system has gone through several working prototypes, and the team is now looking to start manufacturing.
CHECK IT OUT AT
Wed, April 23 2014 » FC2_2014 » No Comments » Author: softscan
ReversedAIML is a program written in AIML (Artificial Intelligence Markup Language) for creating AIML content. ReversedAIML, developed by Charlix, converts factual statements into AIML.
Here’s the link to YouTube:
Download from http://charlix.sourceforge.net/reversedaiml.html
AIML bots can access Cyc, the ontology database.
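A rough illustration of the kind of transformation ReversedAIML performs – this is my own sketch, not Charlix’s actual code: a factual statement of the form “X is Y” becomes an AIML category that answers “What is X?”.

```python
def statement_to_aiml(statement):
    """Turn a fact of the form 'X is Y' into an AIML category that
    answers 'WHAT IS X'. A sketch, not ReversedAIML itself."""
    subject, _, rest = statement.partition(" is ")
    if not rest:
        raise ValueError("expected a statement of the form 'X is Y'")
    pattern = f"WHAT IS {subject.upper()}"  # AIML patterns are upper-case
    template = f"{subject} is {rest}"
    return (
        "<category>\n"
        f"  <pattern>{pattern}</pattern>\n"
        f"  <template>{template}</template>\n"
        "</category>"
    )
```

For example, `statement_to_aiml("Paris is the capital of France")` yields a `<category>` whose pattern is `WHAT IS PARIS` and whose template repeats the fact.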
Sat, April 12 2014 » FC2_2014 » No Comments » Author: softscan
Here is the diagnosis of why 3D imports failed in our project:
Aurasma Tech Team (Aurasma Community Network)
Apr 02 02:52
The issues I can see with this file are:
1. The tar file contains a folder – this is why Aurasma is outputting the “Invalid archive contents” message: it’s looking for a DAE file inside the tar and all it sees is a folder. All your files need to be in the root of the tar archive. I think that when creating a tar file on macOS it will automatically create a folder inside the tar – so don’t beat yourself up about it!
2. When the folder is removed you still have a problem with a missing texture – the texture file notexture.png has been applied to the mesh but isn’t in the tar archive. All textures you apply to the objects in your scene must be in your tar file.
3. The actual DAE file you have is almost fine – the only problem is that your material is set to 0 transparency, meaning it’s totally invisible. So even had you put your notexture.png in there and the tar uploaded successfully, you still wouldn’t see anything when it triggered.
Also, check out our 3D Guidelines for all the essentials: http://www.aurasma.com/wp-content/uploads/Aurasma-3D-Guidelines.pdf
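The folder problem in point 1 can be avoided by adding files to the archive under their base names only, so every member lands at the root of the tar. A sketch using Python’s tarfile module (file names are illustrative):

```python
import os
import tarfile

def make_flat_tar(tar_path, file_paths):
    """Create a tar whose members all sit at the archive root,
    avoiding the extra folder that macOS archiving tools tend to add."""
    with tarfile.open(tar_path, "w") as tar:
        for path in file_paths:
            # arcname strips the directory, so the member lands at the root
            tar.add(path, arcname=os.path.basename(path))
```

Calling `make_flat_tar("scene.tar", ["export/model.dae", "export/texture.png"])` produces an archive containing `model.dae` and `texture.png` at the top level, which is what Aurasma expects.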
Sat, April 5 2014 » FC2_2014 » No Comments » Author: softscan
and general overview and examples
Thu, April 3 2014 » FC2_2014 » No Comments » Author: softscan
Here it is:
Wed, April 2 2014 » FC2_2014 » No Comments » Author: Raheem
Hi everyone – we will meet tomorrow morning at our regular class time just inside the main doors at Nathan Phillips Square. We’ll explore the first project and then head to Trinity Bellwoods to explore the second. We have at least two cars so I *think* we should be able to drive over to the park together, no problem. Please come ready to experience the works – if you have a mobile device that will run Aurasma, bring it!
The link to fordproject’s Channel is
Please post the second channel asap!
It promises to be a nice day – thank goodness. Looking forward to seeing you all, experiencing these projects and wrapping up our discussion.
Wed, April 2 2014 » FC2_2014 » No Comments » Author: Caitlin
Klein’s Bleeding Through (the book) follows the remembered life of Molly, which touches on a sequence of historical events.
And her friend Edgar, who is possessed by the story of the Biblical Ezra.
Klein dreams and re-edits the events of his life. His memories.
He follows leads about dead old people referred to by living oldies
He identifies historical sites that have been obliterated by development
The history of forgetting is pleasure based on abstinence
The computer is essentially an aesthetic of database assets.
What gives story presence is absence
First: the seven memories that Molly remembered before she almost died – via photographs
Second: contextualisation – via press clippings
Third: the aporia of media – everything that Molly forgot, left out, couldn’t see – via movies and video
Computers can’t deliver this third act.
Memory and Thought
The other pole of memory is not forgetting but its absence.
In Buddhist “psychology”, “feelings” of flow are dealt with. Feelings consist of thought plus the accompanying bodily sensations. All thought is conditioned by memory and mental habits. Attention is only fully present when thinking and prejudgment don’t occur and perception of world events is uncolored. This is momentary “enlightenment” (flow?).
We see as through a glass darkly.
When our attention is fully focused we feel flow
Games provide this as does physical exertion
There is a danger in the pleasure that comes from this, as it becomes an end in itself. A dead end.
Internet of Things
There is no internet of things. There is an internet of signs or representations of things.
Things are represented by language or sign systems.
These signs are the content of thought and language.
A database of things only means anything to the user who seeks out meaning there
Real meaning is in the relationships between things particularly people.
The cutting (bleeding) through in language happens when things of these types occur:
A disjunction or omission which disrupts the flow of logic
A seemingly unresolvable contradiction
So anything which disrupts our habitual and conditioned expectations forces reevaluation
From conflict comes reassessment, and new meanings arise.
Tue, April 1 2014 » FC2_2014 » No Comments » Author: softscan
AVATARS OF STORY
Ryan begins with the journalist’s questions: What (Chronicle), How (Mimesis), When, and Why (Plot).
Every journalist /interviewer knows and uses these to construct stories through answers to the questions within the story/context.
She then looks at the very broad semiotic types of Language (speech acts?) and Images defining their strengths and weaknesses.
Language can represent causality, change, temporality, thought and dialogue – chronicle and plot.
Image can make explicit propositions, e.g. causal relationships, evaluations and judgments. It creates immersive space, building cognitive maps.
These serve as references for how best to use the different strengths of each type, but not in enough detail to address complexity.
She does not delve into the plethora of codes active in theatre, painting and cinema such as gesture, proxemics, facial expression, lighting, point of view, editing and framing.
Propositions and assertions are named as functions.
She analyses real-time commentary (of ball games) and sees the role that interpretation takes in retrospectively linking recently described events into a meaningful sequence.
In building Auras in Aurasma we are building single events as islands to be linked. Our script is linear.
It is conceived as a linear network. As for a real network where every node can be visited, she points out that this is true, but for it to be a story no node can be visited more than once.
For interaction to happen she mandates that there must be a range of choices.
The reader’s choice at every decision point determines NOT what can happen next BUT the ORDER of presentation of events.
Moreover – and not dealt with – it is the content of the assertion made in a sequence that determines the flow of meaning from one sequence to its successor. Its causal link. The logic.
To deal with this we need to look at assertions at a micro level. I always saw this as the model of the subject–verb–direct-object clause, for example “Actor1 did Action with Prop to Actor2 and Reaction1 resulted”
or “Fred hit Don with the Bat and Don collapsed”. Here the second action, “collapsed”, becomes the first action of the next sentence: “Don collapsed and the Paramedic revived him with ammonia salts”.
So it is the chain of actions which determines the “meaning/theme” of the sequence, and the last event which makes the link.
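The clause model can be sketched as a small data structure, with a check that each clause’s resulting action seeds the next one – the causal link described above. The field names and example clauses are my own illustration:

```python
from dataclasses import dataclass

@dataclass
class Clause:
    """A subject–verb–direct-object clause plus its resulting action."""
    subject: str
    verb: str
    obj: str     # direct object (may be empty)
    result: str  # the resulting action, which seeds the next clause

def chain_is_linked(clauses):
    """True if each clause's result is the next clause's opening action,
    i.e. the causal link carries meaning from one clause to its successor."""
    return all(a.result == b.verb for a, b in zip(clauses, clauses[1:]))
```

With “Fred hit Don, Don collapsed, the Paramedic revived him”, the chain holds because “collapsed” is both the result of the first clause and the verb of the second.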
So the plot can be a series of events interpreted as a series of statements (standing as indexes of the events), with one statement standing for the whole sequence. This is one form of plot graph.
I digress to a web search on State Transitions and find I am on the same turf.
It’s an excerpt on Narrative and AI where they compare the results of computer-generated conversational responses with those mined from blogs (as sentences or phrases) and selected for relevance. The blog-mined data was judged more realistic.
Then I am at narrativescience.com, a business “story” service.
Quill mines your data and creates an appropriate narrative structure to meet the goals of your audience.
Using complex Artificial Intelligence algorithms, Quill extracts and organizes key facts and insights and transforms them into stories, at scale
Quill uses data to answer important questions, provide advice and deliver powerful insight in a precise,
Digital narratives are defined as:
Simulative NOT representational
Emergent NOT Scripted
Simultaneous NOT Retrospective
Some questions arise here. She frequently states that in practice oversimplification is not useful
Many narratives may mix these modes.
Simulations. Games are simulations. Simulations represent something external to themselves.
Narratives are representational.
So narratives can contain representations derived from the simulations they contain.
Ryan also refers to another useful model, Jesper Juul’s State Transition Machine, which consists of five elements:
1. A finite set of states
2. An input alphabet (the set of possible user actions)
3. A next-state transition function
4. A start state
5. A finish state
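Juul’s five elements map directly onto a small finite-state machine. A minimal sketch – the class and its slots are my own rendering of the model, not code from Juul or Ryan:

```python
class StateTransitionMachine:
    """Juul's five elements: a finite set of states, an input alphabet of
    user actions, a next-state function, a start state, and a finish state."""

    def __init__(self, states, alphabet, transition, start, finish):
        self.states = states          # 1. finite set of states
        self.alphabet = alphabet      # 2. input alphabet (possible user actions)
        self.transition = transition  # 3. next-state function: (state, action) -> state
        self.current = start          # 4. start state
        self.finish = finish          # 5. finish state

    def step(self, action):
        """Apply one user action and move to the next state."""
        if action not in self.alphabet:
            raise ValueError(f"unknown action: {action}")
        self.current = self.transition(self.current, action)
        return self.current

    @property
    def done(self):
        return self.current == self.finish
```

The machine is deliberately generic: what counts as a “state” or an “action” is exactly the open question the next paragraphs try to pin down.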
So I searched the keywords “narrative”, “algorithm” and “state transition”, the last of which returned examples of code. I now have to find definitions of “state” and “actions” within the State Transition Machine.
So I move to another source: Herman’s “Story Logic”.
He puts forward frameworks from a number of sources I will not name, mainly dealing with language-based story.
Story is defined as an ordered set of Propositions
Readers use sequences of propositions to explain a story.
So a reading derives a single abstracted proposition from all the constituent lower-order propositions to explain “what the story means”.
So back to the elements that make up propositions.
At the level of the verb clause, stories are made up of sequences of deliberate, goal-oriented events.
Verbs cover Events, States, Causes, Actions and Motions
Verbs are the semantic core of all clauses
Events are changes of State
Behaviours are comprised of STATES, EVENTS and ACTIONS.
Actions are one type of Event
Entities manifest as Nouns – people, things and places.
Statives (states, existents) have a scope of the total event.
Actives (actions) have the scope of all the event’s components and sub-processes.
Causatives (causes) express determinations between two or more existents.
Stories don’t just deal with things/existents but with causally intertwined sub-processes.
Stories prioritise Causes
Now, inserting these into the State Transition Machine:
The finite set of states – the collection of all the states of all the existents.
The next-state transition function – events are changes in states; what causes an event is a causative.
The start state – in terms of story arcs, there is an initial problem state.
The end state is the resolution.
The input “alphabet” (set of possible user actions) is all the verb clauses?
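Filling in those slots gives a toy example: states are situations of the existents, the input alphabet is verb clauses, and causatives drive the transitions from the problem state to the resolution. The states and clauses below are invented purely for illustration:

```python
# A toy story arc as a state transition table.
# Keys are (current state, verb clause); values are the next state.
TRANSITIONS = {
    ("problem",      "Fred hit Don"):          "crisis",
    ("crisis",       "Don collapsed"):         "intervention",
    ("intervention", "Paramedic revived Don"): "resolution",
}

def run_story(start, clauses):
    """Apply each verb clause in order; a clause with no causative effect
    in the current state leaves the state unchanged."""
    state = start
    for clause in clauses:
        state = TRANSITIONS.get((state, clause), state)
    return state
```

Feeding the clauses in story order walks the machine from the initial problem state to the resolution; out-of-order clauses simply fail to fire, which mirrors the point that the order of events carries the causal logic.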
Tue, April 1 2014 » FC2_2014 » No Comments » Author: softscan