FANCI is a two-year research project that received €2.5 million in funding from Horizon 2020, the European Union's Research and Innovation programme. The consortium includes leading companies in the domains of eye gaze tracking, bare-hand gesture interaction, emotion sensing, facial recognition and embedded DSP systems.
I successfully led this project for Harman International, working on UX and HMI design, architecture and API design, future-proofing, and connecting tangential emerging technologies with the project. I presented the first demonstrator at CES 2016 to major OEM executives, partners and journalists, generating positive media attention for Harman and the consortium companies (e.g. CNET).
I co-authored IEEE paper 16597244, "Sensors fusion paradigm for smart interactions between driver and vehicle", which was nominated for Best Industrial Paper at the IEEE Sensors 2016 conference in Orlando, FL.
More info on the project at: http://fanci-project.eu/
An in-car touch controller that can change shape to communicate with the driver intuitively, and guide the interaction almost subconsciously. This reduces information overload and lowers driver distraction by complementing vision and hearing based HMI channels.
For this project I worked on ideation, patenting, UX design, feedback methodologies and implementation mechanisms. I also led and directly managed the company that created the first official prototype and the company that created the official video.
This was a really interesting project with some deep HCI aspects, such as the idea of "load balancing" the human senses. In the vehicle, the driver's visual channel is already occupied by the primary task of scanning the road and keeping the vehicle on course and safe. That's the best and safest use for the driver's eyes.
The auditory channel is also often occupied by listening to music, podcasts or audiobooks, or by a conversation with a passenger. It is also already used for emergency notifications from the vehicle, so it is not the best channel to overcrowd.
The tactile sense, however, is often underutilized in the vehicle. Some modifications to the HMI can allow the infotainment controller and the steering wheel to change shape, surface texture or stiffness in order to communicate the current menu position in the infotainment system, suggest the best interaction method for the current screen, orient the driver, and provide feedback without using visual or auditory signals.
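To make the idea concrete, here is a minimal sketch (in Python, with hypothetical menu names and parameter ranges, not the actual prototype firmware) of how a menu context could be mapped to a tactile signature of shape, texture and stiffness:

```python
from dataclasses import dataclass

@dataclass
class SurfaceState:
    shape: str        # e.g. "ridge", "dome", "flat"
    texture: float    # 0.0 = smooth .. 1.0 = coarse
    stiffness: float  # 0.0 = soft .. 1.0 = rigid

# Hypothetical mapping: each top-level menu gets a distinct tactile signature,
# so the driver can feel where they are without looking at the screen.
CONTEXT_TO_SURFACE = {
    "media":      SurfaceState(shape="ridge", texture=0.2, stiffness=0.5),
    "navigation": SurfaceState(shape="dome",  texture=0.6, stiffness=0.8),
    "phone":      SurfaceState(shape="flat",  texture=0.9, stiffness=0.3),
}

def update_controller(menu_context: str) -> SurfaceState:
    """Pick the tactile signature for the active menu (default: neutral flat)."""
    return CONTEXT_TO_SURFACE.get(menu_context, SurfaceState("flat", 0.0, 0.5))

print(update_controller("navigation"))
```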
More details here
A co-development project between Harman and the Soli team at Google’s Advanced Technology and Projects group (ATAP), led by Dr. Ivan Poupyrev.
We integrated the Soli sensing technology into a connected home speaker to enable people to control audio without touching a single button, knob or screen. This collaboration with Google ATAP not only resulted in integrating the high-performance radar-based gesture sensor into a JBL speaker, but also in designing a novel light-feedback-based HMI to communicate with the user.
This demonstrator was showcased and demonstrated live on stage at Google I/O conference in 2016.
More info on this project: https://ixds.com/projects/project-soli-google-ixds-harman
HARMAN’s Pupil-Based Driver Monitoring System measures increases in pupil dilation as an indication of a driver’s mental workload. HARMAN’s new proprietary eye and pupil tracking system detects high cognitive load and mental multitasking in the driver’s seat, and signals the car’s other safety systems to adapt to the driver’s state. The technology represents a major step forward in the domain of Advanced Safety and Driver Monitoring Systems (DMS) for vehicles.
Adoption of in-cabin cameras is growing rapidly, enabling features such as occupant detection and driver drowsiness monitoring. With the introduction of high cognitive load detection, HARMAN's eye and pupil tracking technology brings additional value to the driver-facing camera. The technology eliminates the need for complex sensors built into seats and steering wheels, or biometric sensors that require physical contact with the driver. The camera continually captures the driver's pupil dilation, and a proprietary software algorithm analyzes the pupil reflex using advanced filtering and signal processing. The filter isolates and identifies responses triggered by high cognitive load. The calculated outputs are used to intuitively adjust user interfaces, such as placing mobile devices in do-not-disturb mode or adjusting ADAS intervention thresholds to minimize physical and mental distraction to the driver. To learn more about the solution, view this video demonstration here.
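The production algorithm is proprietary, but as a rough illustration of the idea, the toy sketch below flags high cognitive load when the smoothed pupil diameter rises a fixed amount above a relaxed baseline; all names, values and the downstream hook are assumptions for illustration only:

```python
from collections import deque

class CognitiveLoadEstimator:
    """Toy estimator: sustained pupil dilation above a relaxed baseline -> high load."""

    def __init__(self, smooth_window=15, threshold_mm=0.4):
        self.recent = deque(maxlen=smooth_window)   # short-term smoothing window
        self.baseline_mm = None                     # relaxed-state pupil diameter
        self.threshold_mm = threshold_mm            # dilation needed to flag high load

    def calibrate(self, relaxed_samples_mm):
        # Baseline under normal load and current lighting (lighting is the big confound).
        self.baseline_mm = sum(relaxed_samples_mm) / len(relaxed_samples_mm)

    def update(self, pupil_diameter_mm):
        self.recent.append(pupil_diameter_mm)
        smoothed = sum(self.recent) / len(self.recent)
        return (smoothed - self.baseline_mm) > self.threshold_mm

# Example: flag high load and place the phone in do-not-disturb (hypothetical hook).
estimator = CognitiveLoadEstimator()
estimator.calibrate([3.1, 3.2, 3.1])
for sample in [3.8, 3.9, 4.0]:
    if estimator.update(sample):
        print("high cognitive load -> enable do-not-disturb")
```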
I personally worked on this project from conception to first prototype. I managed internal and external resources for this project and showed the first proof of concept at CES 2016.
HARMAN’S Pupil-Based Driver Monitoring System was also named a finalist in CTIA’s 10th Annual Emerging Technology Awards.
This system was also showcased in: https://news.harman.com/releases/harman-showcases-todays-ultimate-in-vehicle-experiences-at-gims-2019
More info at: https://car.harman.com/solutions/adas/driver-and-occupant-monitoring-system
A head-worn auditory listening system that processes all auditory information in order to selectively cancel, add, enhance, or attenuate auditory events for the listener. This system can improve a user's hearing capabilities, auditory user experience and safety in noisy environments.
I was part of the ideation, patenting and refinement of this concept, and I directly managed the creation of the first three proof-of-concept prototypes. I also managed the creation of the UX video you see on this page, which was presented at CES and other trade shows.
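As a conceptual sketch only (the labels and gains are invented for illustration, not taken from the product), the selective processing can be thought of as a per-class gain policy applied to classified auditory events before they are mixed back to the listener:

```python
# Hypothetical policy: 0.0 = cancel, <1.0 = attenuate, >1.0 = enhance.
GAIN_POLICY = {
    "traffic_noise": 0.1,
    "conversation":  1.0,
    "siren":         1.6,   # always let safety-critical sounds through, louder
    "construction":  0.0,
}

def mix(events):
    """events: list of (label, samples) pairs; returns the summed, re-weighted signal."""
    length = max(len(samples) for _, samples in events)
    out = [0.0] * length
    for label, samples in events:
        gain = GAIN_POLICY.get(label, 1.0)   # unknown sounds pass through unchanged
        for i, x in enumerate(samples):
            out[i] += gain * x
    return out

print(mix([("siren", [0.1, 0.2]), ("traffic_noise", [0.5, 0.5])]))
```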
This system was created to give the driver better visual awareness of the vehicle's environment, in particular the area close to the vehicle and low to the ground. It enables the driver to see through the dashboard using robotically actuated stereo cameras and view-dependent rendering.
I created the proof-of-concept and different prototypes, including mechanically actuated cameras, view-dependent rendering, and stereoscopic 3D rendering.
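As a simplified, hypothetical sketch of the view-dependent idea (not the actual implementation), the camera rig can be steered to follow the line of sight from the tracked head position toward the occluded region below the dashboard:

```python
import math

def rig_pan_tilt(head_xyz, target_xyz):
    """Pan/tilt (degrees) that aim the rig along the driver's line of sight,
    from the tracked head position toward a point in the occluded region."""
    dx = target_xyz[0] - head_xyz[0]
    dy = target_xyz[1] - head_xyz[1]
    dz = target_xyz[2] - head_xyz[2]
    pan = math.degrees(math.atan2(dx, dz))                   # left/right around vertical
    tilt = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # up/down
    return pan, tilt

# Driver's head at (0, 1.2, 0) m, looking at a spot 2 m ahead and near the ground:
print(rig_pan_tilt((0.0, 1.2, 0.0), (0.3, 0.1, 2.0)))
```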
Patent Application:
http://www.freepatentsonline.com/y2015/0245017.html
Prototype and interaction proof-of-concept for a system that allows the driver's surround stereo panorama to be rearranged with hand gestures.
The driver is able to grab and move sound sources with their hands, via gesture control. Music, phone calls, and alerts can be handled intuitively. This is particularly useful in combination with HARMAN's Individual Sound Zones (ISZ), where sound sources can be passed on to fellow passengers.
I worked on ideation and concept refinement, built the first prototype, and directly managed the creation of the further-refined versions that followed. The portable speakers we used provide a nice visual cue showing where the audio is coming from while the driver or passenger moves it with their bare hands.
Probably my favorite part of this concept is the idea of input-output coincidence applied to sound and gestures. It was a very unique study and resulted in a very satisfying prototype to use!
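Here is a rough sketch of that input-output coincidence idea, with made-up speaker positions: the playback gains follow the hand azimuth reported by the gesture sensor, so the sound appears to come from wherever the hand is:

```python
import math

def pan_gains(hand_azimuth_deg, speaker_azimuths_deg=(-45, 45, 135, -135)):
    """Simple amplitude panning: louder on speakers closer to the hand azimuth."""
    weights = []
    for spk in speaker_azimuths_deg:
        diff = abs((hand_azimuth_deg - spk + 180) % 360 - 180)  # angular distance
        weights.append(max(0.0, math.cos(math.radians(diff))))  # zero beyond 90 degrees
    total = sum(weights) or 1.0
    return [w / total for w in weights]

# Hand pointing toward the front-right of the cabin:
print(pan_gains(30.0))
```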
Patent Application:
http://www.freepatentsonline.com/y2015/0193197.html
Realized via an array of ultrasonic speakers, mid-air haptic feedback allows gesture interaction and spatial displays to become more physical and tactile for the user.
We worked on leveraging this emerging technology in the vehicle to allow the driver to have a more intuitive experience and eyes-free control over the Infotainment System.
Being able to find the area where gestures are recognized, and to feel both the feedback of a gesture and the system's acknowledgment, allows the driver to interact with the vehicle in a more precise, less distracting and more satisfying way.
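A minimal sketch of that interaction loop, using hypothetical zone coordinates and a generic command tuple rather than any real ultrasonic array API:

```python
def in_interaction_zone(palm_xyz, zone_min=(-0.1, -0.1, 0.15), zone_max=(0.1, 0.1, 0.35)):
    """True if the tracked palm is inside the gesture interaction volume (metres)."""
    return all(lo <= p <= hi for p, lo, hi in zip(palm_xyz, zone_min, zone_max))

def haptic_frame(palm_xyz, gesture_recognized):
    """Return a (position, intensity) command for a hypothetical ultrasonic array:
    a light focal point confirms the hand is in the zone, a stronger pulse
    acknowledges a recognized gesture."""
    if not in_interaction_zone(palm_xyz):
        return None                      # no feedback outside the zone
    intensity = 1.0 if gesture_recognized else 0.4
    return (palm_xyz, intensity)         # focal point placed on the palm

print(haptic_frame((0.02, 0.0, 0.25), gesture_recognized=True))
```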
I worked closely with our technology partner to create a new generation of mid-air tactile and gesture HMI concepts for the vehicle and consumer products. I presented the first prototype concept at CES 2016 generating positive media attention from The Verge.
I subsequently worked on the next-generation standalone prototype to showcase potential automotive UX and applications.
I then worked on the creation of a final prototype, fully integrated into a vehicle and presented at CES 2017, which also generated positive media coverage from ExtremeTech.
More here.
The system is able to capture the driver's eye gaze and eye vergence to trigger information on a transparent display or HUD. By using eye contact, eye gestures, and simply focusing on the area of interest, the driver is able to interact intuitively with the car without using their hands or voice.
I designed and built this system, including the projection-based transparent display, interface, interaction method, and vergence detection using a standard Tobii eye gaze tracking device.
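For illustration, this is one way vergence can be estimated from generic left/right gaze rays (the sketch uses no specific SDK calls; positions are in metres and the numbers are made up): the point of closest approach of the two rays gives a convergence distance that separates focusing on the near display from focusing on the road:

```python
import numpy as np

def vergence_point(eye_l, dir_l, eye_r, dir_r):
    """Midpoint of the shortest segment between the left and right gaze rays."""
    eye_l, eye_r = np.asarray(eye_l, float), np.asarray(eye_r, float)
    d1 = np.asarray(dir_l, float) / np.linalg.norm(dir_l)
    d2 = np.asarray(dir_r, float) / np.linalg.norm(dir_r)
    w0 = eye_l - eye_r
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    s = (b * e - c * d) / denom if denom > 1e-9 else 0.0
    t = (a * e - b * d) / denom if denom > 1e-9 else 0.0
    return ((eye_l + s * d1) + (eye_r + t * d2)) / 2.0

# Eyes 6.5 cm apart, both converging on a point about 0.6 m ahead:
p = vergence_point([-0.0325, 0, 0], [0.0325, 0, 0.6], [0.0325, 0, 0], [-0.0325, 0, 0.6])
print(p)  # approx [0, 0, 0.6] -> the near display rather than the road
```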
I also created a video showing the UX of this system.
Patent application:
https://www.google.ch/patents/US20150192992
Similar and complementary to the Shape-Shifting Controller, this project's goal was to counteract information overload and distraction. With the Future Experience team at Harman International, I researched how to create an in-car touch controller and armrest that can change their surfaces to convey information intuitively to the driver.
Different surface textures for an armrest or controller can intuitively communicate how to interact with a system, a menu or an in-vehicle function. Transforming an input device into an input/output device also helps load-balance the driver's senses by bringing tactile feedback into the loop.
I worked on the ideation, patenting, proof-of-concept research and creation of the video that outlines the in-vehicle UX.
Leveraging the latest technology to use the space between the driver and the steering wheel to display information. Important contextual alerts can be moved into the driver's area of focus without being distracting.
During this research we studied how to use the Z dimension to make the driver perceive time-sensitive alerts as closer to their face and less relevant alerts as farther away.
Similarly, navigation directions can move closer to the driver's eyes as they become more time-sensitive (e.g. as a turn approaches).
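A toy sketch of that urgency-to-depth mapping, with assumed distances rather than measured values:

```python
def alert_distance_m(seconds_to_event, near_m=0.6, far_m=2.5, horizon_s=10.0):
    """Linearly pull an alert from far_m toward near_m as the event approaches."""
    urgency = max(0.0, min(1.0, 1.0 - seconds_to_event / horizon_s))
    return far_m - urgency * (far_m - near_m)

# A turn 8 s away sits far back; at 1 s it floats just in front of the driver.
print(alert_distance_m(8.0), alert_distance_m(1.0))
```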
I created this in 2012 at HP Palm as part of research into non-conventional uses of tablets and pens. The prototype leverages an OptiTrack motion capture system, and the code was written in C.
Patent application pending:
https://www.google.com/patents/US20130222381
During my first year at Hewlett-Packard (my first job out of college and my debut in Silicon Valley!) I worked for the Emerging Platforms Group, a team tasked with conceiving new hybrid platforms and bringing innovative products to market.
This was also the time before HP decided they needed their own operating system and moved forward with the Palm acquisition. At this time HP was creating computers, mobile devices and hardware solutions that were powered, on the OS side, by collaborations with Microsoft and Google. Unfortunately, not all of these collaborations went smoothly! I personally believe it was this product that actually pushed HP to eventually acquire Palm in their quest to control their own OS. Let me explain.
During this time at the Emerging Platforms Group I had the privilege to work with a very talented group of individuals, and together we envisioned, developed and shipped what became the Compaq AirLife 100, the world’s first Android Smartbook, based on the Qualcomm Snapdragon chipset architecture.
The AirLife 100 was a clamshell device, connected by traditional WiFi but also featuring 3G cellular connectivity via SIM card. It had a physical keyboard and trackpad, but also a resistive touchscreen, a camera, and best of all, it was running Android OS! At the time, it was a really unique product with a really unique set of features.
Unfortunately, Google didn't see it the same way. Android was open source, so they couldn't prevent anyone from using and modifying it; however, they were not pleased with the form factor, which was too different from the phones Android had been conceived for. The result? We still shipped, but without the Google app store, G Suite, or any support for Google's services. A real limitation for a netbook of this kind.
For this project I participated in the concept-to-delivery process and worked closely with the software and hardware development and QA teams. I worked on 3rd-party app integration, which was a fantastic first experience in the industry, as I coordinated with a dozen different software companies, learning many product management skills, QA and agile development processes, as well as team leadership skills.
The AirLife 100 shipped through Telefónica across Europe and Latin America just before HP acquired Palm, combining HP's strength in creating great hardware with what at the time was one of the main players in the mobile operating systems space. And the rest is history ;)
A few links you might want to check out:
HP Spain unveils the Compaq AirLife 100 Smartbook
Did we go to CES with an EVT Unit? Yup! Check out the Video here