

Skynet, Gluten-Free Cupcakes, and the Paradox of Automation

https://www.copyblogger.com/wp-content/uploads/2018/05/paradox-of-automation-700x352.jpg

"Smart automation amplifies your empathy." – Sonia Simone


This is a weird time to talk about marketing automation.


If you spend time on social media, you’re probably fed up with the plague of misinformation bots that cluster around any important conversation.


Then we found out that companies like Cambridge Analytica had put their slimy hands on our data by taking advantage of our goofy cousin, the one who still isn’t too savvy about the purpose behind that “What Kind of Cheese Am I?” quiz.


Just to add to the fun, we have the GDPR, a well-meaning law that has sown more than its share of chaos and confusion.


So here I am, wading into the middle of all of this noise to talk about automation, and why it isn’t what we think it is.


I wouldn’t do it if I didn’t think it was important. And here’s why.


Terminator is the dominant myth of our time


I would argue that the story with the lion’s share of our cultural attention right now is James Cameron’s invention, the Terminator — that relentless, smart cyborg serving the needs of an all-powerful Skynet.


We made the machines so smart that they realized they’d be better off without us.


That new incarnation of the Frankenstein myth is the underlying Big Idea of our culture today. How many of your friends have shared those videos of robots figuring out how to get out of their labs? Or the story about Amazon’s Alexa listening in on conversations?


Manufacturing jobs have become rarer than good Adam Sandler movies. And where did those jobs go? Increasingly … to robots.


We continue to automate and automate, even when it seems like it really might be counter to our best interests.


And then we hear about this “marketing automation” thing, and it feels like a great place to bar the door.


“Nope. Not this one. We’ve already gone too far with this.”


But I think that would be a mistake.


The word “automation” is too broad


Is it “marketing automation” if you use a service to send out your email newsletter? After all, in the olden days, businesses used to have to print up pieces of paper and send them to you in the mail. Now, we just enter some messages in a system and, poof, they wind up in our audience’s inboxes.


Let me be clear: I am the first person to whine about my crowded email inbox. Until I get a coupon for my favorite pricey sunscreen. Then I’m very happy to click and order.


If we have a problem with the concept of marketing automation, then we probably have a problem with the idea of digital business. The ebooks we sell, the tutorial videos we publish, the blog posts and infographics and podcasts, all rely on automation to deliver messages to our audiences.


And that doesn’t seem creepy — at least not to me. Probably not to you either, if you read this blog.


Okay, so maybe the problematic automation is the Cambridge Analytica kind. What I call strip mining privacy. Being shady, creepy, and unethical about how you get your hands on personal information, and what you do with it.


I’m all for figuring out how to stop that from happening. It’s unethical and should probably be illegal.


Political consultancy Cambridge Analytica appears to have used data obtained in shady ways to serve up misleading and extremist advertising during crucial recent votes: the 2016 U.S. presidential election and the U.K.’s “Brexit” referendum.


Is the problem how Cambridge Analytica got the data? Partly, yes, it is.


I think the bigger problem, though, is the pack of lies they delivered by using that data.


We think that advertising can sell us something we don’t want


There’s one way that advertising can sell you something you actually don’t want: it can lie to you.


It can tell you a candidate is pro kittens and flowery meadows, when in actuality the candidate has a long history of vile anti-kitten discrimination and personally destroys flowery meadows as a hobby in their spare time.


Targeted political advertising can increase the echo chamber effect. And we know it’s been used to inflame grievances across the U.S. and international political spectrum. It isn’t harmless.


But it can’t sell you a candidate whose values you don’t believe in, unless the ad lies to you.


I would posit that it’s not the targeting — a weirdly militaristic way of saying, “delivering information people actually care about” — that’s doing the most damage. It’s the lies.


And bringing it back to our businesses: you and I, assuming that we tell the truth, don’t have the power to sell anyone anything they don’t want.


All we can do is communicate value and get our messages in front of interested people.


Houston, we have a (language) problem


The vocabulary of automation is its own worst enemy.


Automation. Technology without human judgment or oversight. Skynet.


Bots. Mechanical thugs used to deliver or amplify racist jokes, misinformation, and death threats.


Targeting. Homing in on someone to fire a weapon at them. This is a terrifying word. No sane person wants to be targeted.


Progressive profiling. Two words with intense political connotations, neither of which has anything to do with what the term means. Awesome.


What automation can actually do


If I see a Facebook ad for some comfy socks, or a cute fair-trade dress, or tasty gluten-free cupcakes, do I feel like someone is trying to fire a weapon at me?


No, I feel like someone has been paying attention and respects my time enough to show me something I’m actually interested in.


As it happens, I’ve bought very good socks and some great fair-trade dresses from Facebook ads. Still waiting for someone to show me that perfect gluten-free cupcake.


Automation doesn’t take away our free will. It doesn’t have the power to do that.


And the delivery of a relevant advertisement isn’t the same thing as a robot taking a human job. It’s not helpful to use the same word for both.


What automation does have the power to do is amplify our empathy.


If you know your audience — truly, madly, deeply — you know what they want. You know what they’re worried about. And you know what they delight in.


And you can use thoughtful, truthful, respectful automation to deliver the right messages to the right people.


Not to lie to them or trick them. But to respect what they care about, offer them something of value, and support your own business by connecting with people who value the same things you do.


Good automation vs. creepy automation


The biggest problem with the Skynet metaphor for marketing automation is that it’s … wrong.


Marketing automation does not make decisions. It’s not coming up with creative ways to talk to people. It’s not figuring out how to get out of the lab it was built in.


It just delivers messages written by real, empathetic human beings.


Marketing automation lets your audience choose what they want to know more about, and filter out what isn’t relevant to them.


Smart, ethical automation is fundamentally about respecting your audience. Respecting their time, their interests, and their intelligence.


(Which means we don’t try to pretend that automated messages are being written on the fly. No one would buy that, anyway. Does anyone in your audience think you individually sit down and craft your email newsletter for them and them alone? No.)


Laws like the GDPR, clumsy though they may be, could have some value in reminding marketers and businesses not to do ridiculously shady things.


Not to scrape a bunch of data from Person A because Person B gave consent. (Come on, now.)


Not to steal or buy data people gave without realizing what it would be used for. (Come on, now.)


Not to be cavalier with data, particularly sensitive data, and through your sloppiness allow it to be compromised. (Come on, now.)


Please let me know what you think


I’ve been thinking about putting together a document of recommended best practices for how we as digital businesses handle automation and information about our audiences.


This wouldn’t be intended as legal advice (I am in no way qualified to give you that). I’m thinking more of a kind of manifesto — a call for discussion and agreement on an ethical standard.


Not the “Sonia has strong opinions” version. (Although I do, in fact, have strong opinions.) But a document that looks for consensus around what Good Folks do when striving to deliver a more relevant message.


I hope you’ll drop me a comment below with your thoughts! And if you have a resource or connection for me, that would be excellent as well.




