Artificial intelligence is set to become a key tool for technology marketing leaders, but its use will require an organisational ethical code of practice
Artificial intelligence (AI) could become a necessary evil of the modern technology marketing leader's world, but the technology will rely on CMOs to act ethically.
"AI has the potential to transform the practice of marketing in significant ways over the next decade-plus," says Bryan Yeager, Senior Director Analyst at Gartner and technology and marketing analyst house. Yeager's analysis indicates that technology marketing leaders will use AI for insight and to increase the scale of marketing impact. For the last decade marketing teams have been seeking the nirvana of individually focused marketing, particularly in the business-to-consumer sector; for enterprise technology marketeers context based marketing to the C-suite is also vital. AI is seen as a way of creating this target market of one. In its research Gartner says marketing leaders are already experimenting with AI for customer engagement..
Engaging successfully with a customer requires an understanding of that customer, which means gathering a wide range of information. AI can help with this, but it is also where technology marketing leaders need to act ethically.
Natalia Konstantinova, a data scientist with oil and energy firm Shell, is already experimenting with AI to "understand customers and what they need"; a trend that will traverse every sector, according to industry analysts. Technologist Joe Garber, Global Head of Strategy at software firm Micro Focus, adds: "Driving revenue and customer insight by getting closer to the customer and delivering things faster to the market is what is driving the topline and therefore the interest in technologies like AI and robotic process automation (RPA)."
"The negative implications of AI are amplified and people talk about deep fakes and misinformation, especially with elections coming up," Rumman Chowdhury, managing director of Accenture AI said. The implications of Chowdhury's PHD analysis are of particular importance at the moment with widespread discussion in society on how AI was used to change the voting behaviour of the US elections and the Conservative Party EU referendum in the United Kingdom.
"There are no specific regulations for AI just yet," adds Chris Eastham, a director with leading technology law firm Fieldfisher. "In 2018 the House of Lords in the UK released a paper on ethics and AI that stated that they did not believe there should be AI regulations and that instead the technology should be governed by the regulations of each sector." Which is possibly easier to define for sectors such as financial services or pharmaceutical, but less so for technology, which regulators have struggled to understand.
In her research at technology consultancy Accenture, Chowdhury frames the concerns towards AI into three areas: Immediate, Impactful and Invisible. In the case of AI, all three are often interlinked. Speaking at a major technology conference in Copenhagen, Denmark, organised by cloud computing firm Nutanix, Chowdhury told business leaders how changes in code cause immediate impact. AI, she adds, is code.
Chowdhury tells the story of the automated debt recovery debacle at Australia's Department of Human Services, whereby millions of Australians dependent on the state were told to pay back debts, many of which may not have existed. In this case, automation and data were immediate and impactful, in a way that was largely negative for the consumer and then for the organisation.
"We have some protection in the employment laws," Eastham of Fieldfisher says. However reducing worker and consumer rights is seen as a core reason for the Conservatives wishing to leave the European Union and commentators state that the USA is looking to reduce rights in its economic competition with China.
"The danger with AI is the implementation by human beings, either intentionally or unintentionally," Chowdhury says of how AI is not a dangerous technology, but if implemented without an ethical basis it can cause harm.
In the autumn of 2018 it was revealed that a recruitment algorithm developed by Amazon was biased against female candidates. Eastham at Fieldfisher warns marketing technology leaders to be aware of "development bias": the result of software developers writing code that too closely reflects their own heritage.
"There is not a single set of values. We have to accurately reflect society and at times work against it," Eastham said.
Chowdhury at Accenture believes organisations are aware that in deploying AI they need to develop an ethical stance. "Now clients are asking for governance methodologies, not just the technical tools to implement AI," she said, adding that organisations need more than governance: the approach to AI is intrinsically linked to the culture of the organisation.
Eastham said: "I want to see AI used for the common good and for society to prosper, develop and improve."