Gauging User Satisfaction with IoT
How do you evaluate how users interact with a physical experience? This is a question we have pondered for some time. Increasingly, physical spaces and digital experiences are merging, and that creates a need to go beyond surveys and interviews as a means of analysis. We decided to take a stab at continually evaluating a physical experience using image analysis, machine learning, and IoT.
First, I love the R&D focus at Universal Mind. Over the years, this has led to incredible innovations and solutions like SpatialKey, Journeys, iBrainstorm, and the iPad Table. While some of these efforts were formal projects, others were simply intelligent people working together to solve a problem in a new and unique way. This effort was no different.
Back in late February, we started designing a concept that used image analysis to detect emotions. We would then combine the detected emotions with a few facial indicators to calculate a sentiment score focused on satisfaction. Based on this value, we could place each user on a spectrum from completely dissatisfied to completely satisfied.
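The post doesn't publish the actual scoring formula, but the idea can be sketched as a weighted combination of per-emotion confidences. In this hypothetical sketch (the emotion labels and weights are assumptions, loosely modeled on what image-analysis services return), each emotion is weighted by how strongly it signals satisfaction, and the result is normalized to a 0-100 scale:

```python
# Hypothetical sketch -- the weights and emotion labels are illustrative,
# not the production formula. Assumes the analysis step returns a
# confidence (0-100) per detected emotion.

EMOTION_WEIGHTS = {
    "HAPPY": 1.0,
    "SURPRISED": 0.25,
    "CALM": 0.5,
    "CONFUSED": -0.5,
    "SAD": -0.75,
    "ANGRY": -1.0,
    "DISGUSTED": -1.0,
}

def sentiment_score(emotions: dict) -> float:
    """Map emotion confidences to a satisfaction score in [0, 100].

    50 is neutral; 0 is completely dissatisfied, 100 completely satisfied.
    """
    total_conf = sum(emotions.get(e, 0.0) for e in EMOTION_WEIGHTS)
    if total_conf == 0:
        return 50.0  # no emotion signal: treat as neutral
    weighted = sum(w * emotions.get(e, 0.0)
                   for e, w in EMOTION_WEIGHTS.items())
    # weighted / total_conf lies in [-1, 1]; rescale to [0, 100]
    return 50.0 + 50.0 * (weighted / total_conf)
```

Averaging this score over all faces seen in a time window then gives the kind of spectrum value described above.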
To accomplish this, we would need a collection of IoT devices smart enough to detect faces and collect images of them. These devices would then pass the images to a serverless cloud workflow where we could analyze them and calculate the sentiment score. Finally, we would need to aggregate all of this information and analyze it both in near real time and historically.
We were able to build and deploy the initial MVP solution in six weeks.
Based on our needs we moved through several prototype devices. We ended up with a device that is powered by a Raspberry Pi 3. We utilized a development approach that allowed us to remotely manage the devices and deploy code changes rapidly. Even on the first day of deployment, we pushed several updates out to all of the devices.
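The post doesn't describe the device firmware, but one practical concern for a Raspberry Pi continuously pushing face images to the cloud is rate-limiting uploads so a face that lingers in frame doesn't flood the pipeline. A minimal sketch of that throttling logic (the class name and interval are hypothetical, not from the project):

```python
import time

class UploadThrottle:
    """Allow at most one image upload per `min_interval` seconds.

    Hypothetical device-side guard: keeps a capture loop from
    re-uploading the same lingering face many times per second.
    """

    def __init__(self, min_interval: float = 5.0, clock=time.monotonic):
        self.min_interval = min_interval
        self.clock = clock  # injectable clock, which makes this testable
        self._last_upload = None

    def should_upload(self) -> bool:
        now = self.clock()
        if self._last_upload is None or now - self._last_upload >= self.min_interval:
            self._last_upload = now
            return True
        return False
```

In a capture loop, the device would check `should_upload()` after detecting a face and before publishing the image toward the cloud workflow.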
Sentiment Analysis Device V1
For the cloud platform, we embraced a serverless workflow with AWS. We utilized AWS Lambda with AWS Step Functions to create a series of analysis steps that eventually generated the core metadata and sentiment score for each face based on emotions detected in the image. In addition, we utilized core features of AWS IoT to manage device configuration and communication. The only aspect of our approach that wasn't serverless was the real-time data analysis, for which we leveraged Elasticsearch.
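A Step Functions workflow like this is defined in the Amazon States Language as a chain of Task states, each invoking a Lambda function. A simplified sketch of what such a state machine could look like (the function names and ARNs are placeholders, and the real pipeline's steps aren't published):

```json
{
  "Comment": "Hypothetical sketch of the per-image analysis workflow",
  "StartAt": "DetectFaces",
  "States": {
    "DetectFaces": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:detect-faces",
      "Next": "ScoreSentiment"
    },
    "ScoreSentiment": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:score-sentiment",
      "Next": "IndexResult"
    },
    "IndexResult": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:index-result",
      "End": true
    }
  }
}
```

Each state's output becomes the next state's input, which is what lets a chain of small Lambdas accumulate the metadata and sentiment score for a single image.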
We’ll provide a rundown of our serverless architecture in a future post.
The First Rollout
Our first rollout of the solution leveraged our Grand Rapids office and tracked sentiment for an entire work day. This proved to be a compelling case, as we also had an event that night in our office where we showcased new AR/VR technology. This provided an excellent opportunity to analyze many different faces across the entire office. By the end of the day, we had analyzed over 27,000 images.
We included a real-time dashboard of the sentiment data (seen below) where individuals could watch the progress throughout the day. It was validating to see the average sentiment score track very closely with our ghost map, the estimate we had provided at the beginning of the day. It was also interesting to see how the sentiment changed once our evening event started compared to during the work day.
Sentiment Analysis Realtime Dashboard
This is just the first phase of what we are looking to do with this R&D effort. We will be working to create new use cases that are powered by this sentiment analysis platform. It is amazing what can be accomplished by combining cloud capabilities, machine learning, and IoT in a relatively short period.
Curious about how your business can leverage IoT?
Schedule a chat with one of our IoT experts to see what is possible.