The theory goes that our technology can enable us to have a sixth sense. With the computer I keep in my pocket, I can find directions, the addresses of local businesses and amenities, and restaurant reviews.
Pattie Maes of MIT’s Fluid Interfaces Group demoed this at TED:
The flashy bit is actually the least important part of the technology – it’s not about the projection mechanism, it’s about the interpretation algorithm: software that understands context.
While I have seen ‘wearable’ interfaces before, they tend to look a little dorky – but then we all thought Lieutenant Uhura looked odd with her earpiece, and now every second car driver wears a Bluetooth earpiece. Why the earpiece is not acceptable as pedestrian wear is beyond me; it would seem obvious.
And with the recent innovation of adding VoiceOver to one of the smallest music players on the market – the player reads the text of MP3 tags to you – it won’t be long before we can expect something similar in the iPhone and other smartphones. Receive a text – yeah, just read it to me. Email, yup. Tweets, sure. Tell me when your battery is low. Give me email filters that read out any message marked urgent, or those from certain people.
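(How simple could such a filter be? A minimal sketch in Python – the VIP addresses and the priority header check are invented for illustration, not any mail client’s actual API:)

    # A toy "read it aloud?" mail filter; addresses and headers are illustrative.
    VIPS = {"boss@example.com", "family@example.com"}

    def should_read_aloud(sender, headers):
        # Speak the message if it is flagged urgent or comes from a VIP.
        return headers.get("X-Priority") == "1" or sender in VIPS

    print(should_read_aloud("boss@example.com", {}))        # True
    print(should_read_aloud("newsletter@example.com", {}))  # False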
I don’t need a projector around my neck – I want something which will, for example, use Bluetooth to identify folk I meet (gee, we need some handy peer-to-peer tech there), immediately fetch their social data from LinkedIn, Facebook, Twitter or whatever is popular, and then feed that information into my earhole so I know what to say. Yes, there are technical difficulties here (though a lot of interface choices could be handled with a chording approach, or by using the thumb to select from four finger choices).
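(A minimal sketch of that pipeline, assuming the bleak library for Bluetooth scanning and pyttsx3 for speech; lookup_profile() and its address-to-person directory are hypothetical stand-ins for the social-network lookup:)

    # Scan for nearby Bluetooth devices, map them to people, whisper the result.
    import asyncio
    import pyttsx3
    from bleak import BleakScanner

    def lookup_profile(address):
        # Hypothetical: in reality this would query LinkedIn/Facebook/Twitter
        # for a profile the person has chosen to associate with their device.
        directory = {"AA:BB:CC:DD:EE:FF": "Jane Doe, CTO at Example Corp"}
        return directory.get(address)

    async def whisper_nearby_profiles():
        engine = pyttsx3.init()
        for device in await BleakScanner.discover(timeout=5.0):
            profile = lookup_profile(device.address)
            if profile:
                engine.say("Nearby: " + profile)  # feed it into the earhole
        engine.runAndWait()

    asyncio.run(whisper_nearby_profiles())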
I think the problem is that I’ve ranted about this before. I want a Ghost in the machine which will handle some of the mundane tasks for me. I want it to feed me more info, on demand. And what’s best, this is all UI. It’s about putting a more human interface onto existing technology. At the moment we only really use our eyes for data, yet we respond to much more than just visuals. When we watch a movie, it doesn’t matter whether the screen is huge or fits in our pocket as long as the sound is good. We need to be using the aural sense a lot more, and not just for indignant beeps.
It’s not about the projector round your neck or the attached camera – that’s information and interface without privacy or exclusivity. I’d happily carry round my iPhone if this were available – I already carry it everywhere.
Heck, I’d even settle for something that could just tell me yes or no.
Wow Matt, that’s deep for a Saturday, when all I could do was figure out which order to listen to the tracks off “Across The Universe”.
But, to me, sixth sense is an API and SDK for the Matrix. Oh, and a USB – Universal Synaptic Bypass. Why worry about providing sensory input when it can be fully simulated, packaged and branded?
The real challenge I see is that the encoding scheme for neural impulses is more related to analog than digital computing – but that’s nothing a few “sliders” on the UI can’t easily overcome.
I also think this would create far more interesting paradigms and incentives for QA.
I agree – Sixth Sense should be a set of interoperability standards and an API, so that your phone can talk to your laptop and both can talk to your shirt-mounted heart monitor, your ear-mounted Bluetooth headset (which can also take your body temperature), your shoe-mounted pedometer and your wrist-mounted pulse meter (which also tells the time, handles authentication and gives the system context on what you’re doing).
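(What might the common format look like? A sketch, assuming JSON on the wire; the field names and device names are illustrative, not an actual standard:)

    # One shared reading format that every device can emit and consume.
    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class SensorReading:
        device: str       # e.g. "shirt-heart-monitor", "shoe-pedometer"
        metric: str       # e.g. "heart_rate_bpm", "steps", "body_temp_c"
        value: float
        timestamp: float  # seconds since the epoch

    reading = SensorReading("wrist-pulse-meter", "heart_rate_bpm", 72.0, time.time())
    print(json.dumps(asdict(reading)))  # what the phone, laptop etc. would exchange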
Not sure I want a USB, considering the number of bugs in corporate software – and though I could be convinced, not many people are going to put a dirty plug into their nervous system 🙂 Very William Gibson/Walter Jon Williams, though.
Recording neural impulses is analog, yes, but computing is able to handle analog. I was reading last night about heuristics (I’d studied it before, but then this boatload of books arrived from Amazon – and this is on the back of me trying to put together a business case to do something with voyheuristics.com). It’s all about the analog – about the things that happen ‘most’ of the time. We’re teaching computers to interpret our conscious responses, and they’re already well able to interpret our other analog data. I look forward to having a computer give me an opinion based on my analog data.
Not to be picky, but if you have a heart monitor, the only need for a wrist-mounted pulse meter would be to calculate pulse wave velocity – or, of course, to verify the identity of the wearer and report to Big Brother!
Pulse wave velocity (and amplitude) will help determine other factors as well – such as blood pressure, blood loss, constriction – you have to look at the delta as well as the absolute value.
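(The arithmetic is simple – divide the arterial path length by the pulse transit time between the two sensors. A back-of-the-envelope sketch; the 0.75 m path and 100 ms transit time are assumed for illustration:)

    # Pulse wave velocity from a heart monitor plus a wrist pulse meter.
    path_length_m = 0.75    # heart-to-wrist arterial path length (assumed)
    transit_time_s = 0.100  # delay between heartbeat and wrist pulse (assumed)
    pwv_m_per_s = path_length_m / transit_time_s
    print(pwv_m_per_s)  # 7.5 m/s; stiffer arteries -> shorter delay -> higher PWV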
The real interest in all of this is the delta – changes of state – whether that’s mood (How are you feeling today?), diet (Have you eaten your breakfast? There’s a Wireless Weigh-in station nearby – spare a minute to record your weight!), movement (You’ve not gotten out of bed and it’s after noon?) or other health-related factors (Taken your pills? Your heart is erratic. Your pulse is low. Summoning a paramedic).
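(A sketch of what delta-based alerting might look like – the 20% threshold and the baseline figure are invented for illustration:)

    # Flag a reading that drifts too far from the wearer's rolling baseline.
    def delta_alert(baseline, reading, threshold_pct=20.0):
        # True when the change of state, not the absolute value, is notable.
        return abs(reading - baseline) / baseline * 100.0 > threshold_pct

    print(delta_alert(72.0, 95.0))  # resting heart rate up ~32% -> True
    print(delta_alert(72.0, 75.0))  # ~4% wobble -> False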
As a “healthy” person, I’m concerned about maintaining my wellness. And if my health monitoring changes dramatically, I’d want others to know.
The reporting of this data is where most folk, especially the ACLU types, would like to see DRM applied. That would limit what gets reported to Big Brother but, at the end of the day, “You have zero privacy anyway, get over it.”
Perhaps we’ve been thinking about this all wrong. Perhaps the sixth sense isn’t another awareness sense, just one more added to the first five. What if it is a meta-sense, an awareness of all the other senses?
Computers can only execute one instruction at a time, but they do it very quickly, so it gives the illusion of simultaneity. Neural pathways and associative memory access really do work in parallel, WITHOUT enqueuing or semaphores or other artificial means of synchronizing processes. And sometimes there are neural race conditions that generate instances of déjà vu.
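(For the software side of that analogy, a minimal Python sketch of the race condition that semaphores and locks exist to prevent – two threads updating a shared counter with no synchronization:)

    # Two unsynchronized threads race on a shared counter; because the
    # increment is a read-modify-write, updates can be lost and the final
    # total may come out below 200000, depending on interpreter scheduling.
    import threading

    counter = 0

    def worker():
        global counter
        for _ in range(100_000):
            counter += 1  # not atomic: load, add, store can interleave

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 200000 only if no update was lost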
I think that depends on the definition of computer.
Parallel computing is a huge research field, and I would go as far as to say there are computers that can process in parallel. Again, it comes back to your definition of a computer.
http://en.wikipedia.org/wiki/Parallel_computing
I’d use Turing or von Neumann as a good starting point for a definition. CPUs still fetch, decode and execute only one machine instruction at a time. Multiple processors or parallel processors are just aggregations of single CPUs. Eventually it all comes down to some form of synchronization, with or without hardware or firmware assist.
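(To make the von Neumann point concrete, a toy machine in a few lines of Python – the three-instruction set is invented for illustration – whose loop performs exactly one fetch, decode and execute per step:)

    # A toy von Neumann machine: strictly one instruction per cycle.
    memory = [("LOAD", 5), ("ADD", 3), ("ADD", 2), ("HALT", 0)]
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc]  # fetch
        pc += 1
        if op == "LOAD":      # decode, then execute
            acc = arg
        elif op == "ADD":
            acc += arg
        else:                 # HALT
            break
    print(acc)  # 10 - everything above happened serially, never in parallel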
If you want to go even deeper: a machine cycle can still only do a fetch, an execute, or a memory/register operation. Deeper still, there is only AND and OR, or NAND and NOR.
Compare that to REAL parallel processing – say, the eye and optic nerve, electrical potentials carried by neurons across synaptic junctions. Now THAT’S a computer.
But what if the sixth sense is processing synchronization across the individual synaptic pathways of the optical, olfactory, aural, gustatory and tactile subsystems? What if you could “taste” music, or “smell” blue, or “hear” sour?