Being connected via wearables without your mobile device is already a reality with untethered tech like Android Wear and the Samsung Gear S2, both of which support e-SIMs that tap into your existing cell network at no extra cost. It's a good bet that every smartwatch brand will have an LTE version by the end of 2016, and that means even more data: there is already a vast number of facts and untold nuggets of information that could surprise even big data's most ardent followers. Big data is about to become behemoth data.
Every day, we create 2.5 quintillion bytes of data (that's 2.5 followed by a staggering 18 zeros!)[1] – so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, the Curiosity Rover on Mars, your Facebook video from your latest vacation, purchase transaction records, and cell phone GPS signals, to name a few. Google alone processes 3.5 billion requests per day and stores 10 exabytes of data (10 billion gigabytes!)[2]
Whether it's tracking driving habits to offer insurance discounts, using biometric data to confirm an ATM user's identity, or using sensors to detect that it's time for garbage pickup, the era of the Internet of Things (IoT), in which "smart" things can seamlessly collect, share and analyze real-time data, is here.
Imagine a world where your watch recognizes that you withdraw cash every Saturday so that you're ready for the neighborhood lemonade stand and your evening outing, and you haven't made your usual transaction yet. A helpful alert pops up on your device, and another reminder displays when you're within a half mile of your bank's ATM, where a retina scan allows you to withdraw funds. Your smart refrigerator identifies that you're running low on eggs and yogurt, while your wearable identifies an open parking space within 50 feet of your favorite Saturday farm market stop, but cautions you that there's a marathon starting in two hours, so you'd better get a move on. A "ping" in your email indicates that the killer little black dress you've wanted just became affordable, thanks to a special discount coupon you received as you drove past the store. While you're away, the sun comes out, so your smart home lowers the window shades, turns up the A/C a few degrees and suggests adding popsicles to the grocery list. Like any fabulous assistant, technology not only aids you but anticipates your needs, helping you make smarter, faster decisions based on "advice" you can trust. This is big data at its best.
Having the ability to be smarter, faster and always connected without having to carry around a device (or anything at all)…great.
Using big data to synthesize all of the fragmented individual data points into an orchestrated, holistic, powerfully intelligent view of the customer to help them during these everyday micro-marketing moments…priceless.
Big data allows brands to go beyond driving customer motivation and engagement in the value exchange; it lets them foster brand affinity and cultivate customer evangelism in real time, responding to customers' behaviors even as their activities and likes shift.
The concept is simple, but many brands are struggling to get it right (or to get started at all). Leading brands have already gained a powerful competitive advantage by adopting consumer management technology that allows them to understand and engage based on individual consumer preferences and on observation of behaviors and buying signals across the buyer and customer journey – thus taking a big step toward making big data a strategic reality.
Is big data, or rather behemoth data, really the answer all by itself? There is a lot of insight to be garnered from that data, but the key is being able to quickly sift through it all, tuning out the noise to focus on the key patterns and meaningful relationships in that data.
Traditional statistical analytics techniques, which focus on finding relationships between variables to predict an outcome, simply won't do when the goal is to optimize decisions using massive pools of data that grow and evolve on a near-continuous basis. This is where machine learning comes into play and brings the needed "giddy-up" to the analytics. Machine learning evolved from the study of pattern recognition within the field of artificial intelligence. The easy way to think about it: it gives computers the ability to learn and improve without a specific program being written to tell them how to "learn and improve." Machine learning software identifies patterns in the data in order to group similar data together and to make predictions. Whenever new data is introduced, the software "learns," building a better understanding of the optimal decision. Think of it as the automation of the predictive analytics process.
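To make that "learns as new data arrives" idea concrete, here is a minimal sketch of an online learner: a simple perceptron written in plain Python. Everything in it – the feature names, the toy data stream, the learning rate – is hypothetical and chosen only for illustration; a production system would rely on a mature machine-learning library rather than hand-rolled code.

```python
# Toy "online" learner: a perceptron whose weights update each time a
# new observation streams in. No decision rule is hard-coded; the model
# adjusts itself as data arrives. All data below is invented.

def predict(weights, bias, x):
    """Classify x as 1 or 0 using the current weights."""
    activation = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if activation > 0 else 0

def learn_one(weights, bias, x, label, rate=1.0):
    """Update the model from one new observation (the 'learning' step)."""
    error = label - predict(weights, bias, x)   # 0 if we were already right
    weights = [w + rate * error * xi for w, xi in zip(weights, x)]
    bias = bias + rate * error
    return weights, bias

# Hypothetical stream: ([site_visits, offers_seen], made_a_purchase?)
stream = [([2, 1], 0), ([4, 3], 1), ([1, 0], 0),
          ([5, 2], 1), ([3, 3], 1), ([0, 1], 0)]

weights, bias = [0.0, 0.0], 0.0
for x, label in stream * 20:        # replay the stream so the model converges
    weights, bias = learn_one(weights, bias, x, label)

print(predict(weights, bias, [5, 3]))   # high-engagement profile -> 1
print(predict(weights, bias, [1, 0]))   # low-engagement profile  -> 0
```

The point of the sketch is the loop: each new observation nudges the weights, so the model's predictions improve continuously without anyone rewriting the program.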
There is certainly a lot of overlap between statistical analytics and machine learning, but there is one key difference. The former requires that someone formulate a hypothesis and structure a test to evaluate whether that hypothesis is true – for example, a hypothesis that a particular marketing lever (e.g., a certain offer or message) will generate, or "cause," additional account openings or sales. Machine learning does not worry about hypothesis testing; it simply starts with the outcome you are trying to optimize – sales, for example – and uncovers the factors that drive it. As more data is introduced, the algorithm learns and improves its predictions in near real time.
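As a toy illustration of starting from the outcome rather than from a pre-stated hypothesis, the sketch below ranks candidate drivers of a sales series by how strongly each one tracks it. The data and variable names are invented, and simple correlation is a deliberate simplification; real machine-learning pipelines use far richer models to surface drivers.

```python
# Outcome-first driver discovery (illustrative only): instead of testing
# one hypothesis ("offer A causes sales"), start from the outcome and
# rank every candidate driver by strength of association with it.

from math import sqrt

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly marketing data: candidate drivers vs. sales.
data = {
    "email_offers": [10, 12, 8, 15, 9, 14],
    "store_visits": [200, 220, 190, 260, 210, 250],
    "ad_spend":     [5, 5, 6, 5, 6, 5],
}
sales = [40, 47, 35, 60, 42, 56]

# Rank candidate drivers by absolute association with the outcome.
drivers = sorted(data, key=lambda k: abs(correlation(data[k], sales)),
                 reverse=True)
print(drivers[0])   # the strongest candidate driver of sales
```

No one told the code which lever mattered; the ranking falls out of the data, and re-running it as new weeks arrive would update the answer – the same outcome-first posture, in miniature, that machine learning takes at scale.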
Interestingly, machine learning has been around for decades. But now, thanks to the massive explosion in data, cheaper cloud-based data stores, and huge increases in computing horsepower, machine learning is really starting to hit its stride.
Laura Watson is Strategy Director at Harte Hanks, and Korey Thurber is Chief Analytics & Insights Officer at Harte Hanks. Harte Hanks can help your brand leverage big data; contact us for a free assessment.