Category Archives: Artificial Intelligence

New and ongoing developments in computers that can think for themselves.

Top 8 Chatbot WordPress Plugins to Capture More Leads


The Best 13 WordPress Chatbots For Your Website in 2024

The API provides essential functionality like segment integration, a conversation sidebar, and more. Chatfuel is one of the most robust bot-building platforms, allowing individuals and enterprises to create conversational AI bots. It is easy to understand and manage and requires no programming skills. The Botpress plugin is also popular worldwide, with more than 9.9k stars on GitHub and 3.5k active community members.

With basic chatbots, you can write questions that sound like human speech. However, a basic chatbot typically won’t allow users to provide custom responses, which means it can only cover common conversational pathways. It comes with a simple drag-and-drop interface, which makes it easy to set up a chatbot for your Facebook page.

This free WordPress chatbot engages visitors in conversations to grow email lists, generate and qualify leads, and set appointments. It also features an interactive FAQ tool for educating customers on products and services. You can easily add a chatbot to your WordPress site using ChatBot.

One benefit of Chatra is its live view of visitors and their carts, which can provide valuable insights into customer behavior and help with targeted marketing efforts. The chatbot plugin also has a simple and intuitive design, making it easy for website owners and visitors to use. WordPress chatbots don’t always have the best analytics tools, so this can help. This is one of the best WordPress chatbots, as it’s armed with the essential functionality a business might require for seamless communication with visitors. The tool offers rich customization options so that the chat widget design matches your brand style.

AI Chatbot is a great Product and the Support is superior!

With Tidio, you can build chatbots quickly using 35+ pre-defined templates. You can also write answers for commonly asked questions and Tidio will deliver those responses when customers ask similar questions. NLP and ML help chatbots detect customer intent and generate accurate responses to user concerns.
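
For readers curious what "detecting customer intent" actually looks like, here is a minimal sketch of an intent classifier in Python using scikit-learn. It is a toy illustration under stated assumptions, not how Tidio or any other plugin works internally, and the training phrases and intent labels are invented for the example.

```python
# Minimal intent-detection sketch: map a visitor's message to one of a few
# predefined intents using TF-IDF features and logistic regression.
# All phrases and intent labels below are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "where is my order", "has my package shipped",          # order_status
    "how much does the pro plan cost", "show me pricing",   # pricing
    "i want a refund", "cancel my subscription",            # cancellation
]
intents = [
    "order_status", "order_status",
    "pricing", "pricing",
    "cancellation", "cancellation",
]

# TF-IDF turns each phrase into a weighted bag-of-words vector;
# logistic regression then learns which words signal which intent.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(training_phrases, intents)

print(classifier.predict(["when will my package arrive"]))  # likely: order_status
```

Production chatbots add far more training data, context tracking, and response generation on top of this, but the core idea of mapping free text to a known intent is the same.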

Let’s go through each of these platforms and explore them more in-depth.

Source: "Best AI Chatbot Platforms for 2024," Influencer Marketing Hub. Posted: Thu, 28 Mar 2024 07:00:00 GMT.

They can be useful if you have a high volume of requests to process on a daily basis. This real-time communication tool is very often used in customer service and technical support, as it offers a fast, direct response to users’ needs. Adding a chatbot to your WordPress website will allow you to provide 24/7 customer support to your visitors, even when your support team isn’t available. Upgrading to WPBot Pro powers the chatbot with OpenAI (ChatGPT) fine-tuning and GPT Assistant features.

These tools make it easy to hand off your tasks to automate the sales process. Displaying your products through cards and carousels is a simple way for customers to swiftly discover what they seek. This feature is even more helpful when showcasing ongoing product discounts. Once a choice is made, the chatbot seamlessly integrates selected items into the cart and provides a concise order summary, streamlining the shopping experience. Enjoy this list of seven WordPress chatbot plugins based on their features and online reviews. There’s no AI incorporated, but you can integrate it with tools such as Google Docs, Slack, or email to streamline the transmission of captured data to your preferred form of intake.

There is also a custom plan that may be ideal for a big organization. In reality, not everything people call AI has anything to do with real artificial intelligence. Moreover, small and mid-sized companies rarely need AI-powered chatbots that require large amounts of data to work correctly. Leave your email address with a chatbot, and maybe someone from a support team will get in touch with you later.

Source: "7 Best AI Chatbots For Your WordPress in 2023," Magnum Learn – Magnum Photos. Posted: Thu, 29 Feb 2024 11:22:02 GMT.

MyAlice is a modern customer support and social selling platform that helps hundreds of growing eCommerce and DTC brands daily. It connects all your social channels and website inventory in one platform, streamlining customer interactions such as order processing, order cancellation, and order updates in a single browser tab. With the plugin, you can monitor your agents’ conversation quality, conversion rates, and response times to improve your customer support.

What is the Best WordPress Chatbot?

Below, we provide the step-by-step process to install a chatbot on WordPress. Setup usually involves entering an API key, which allows your chatbot to communicate with the WordPress backend. After customizing the chatbot according to your business needs, you should test it thoroughly. You can also build a highly optimized marketing strategy and present it to your customers after initiating a conversation.

In this article, we will share the best WordPress chat plugins that will help you communicate with your users. What sets DocsBot AI apart from its competitors is that it can also be used to generate AI content. As an AI writer, you can train it to support and write marketing materials while retaining your exact voice and brand identity.

Your decision will probably come down to what paid features, if any, you need to use. Comparing each of these chatbot services against the others can take some time. Hopefully, this list can trim that requirement down so you can narrow your options to a few possible choices. In the end, since many of these can be used for free, we recommend giving several a try before making your ultimate decision.

How chatbots can help agents improve customer support

You can leave the other settings as they are and click the ‘Create’ button. Simply select an industry of your choice from the dropdown menu and click the ‘Next’ button. After that, toggle the ‘Hide chat on mobile’ switch to ‘On’ if you don’t want people visiting your website from their phones to be able to see the chatbot. Once you are done, don’t forget to click the ‘Publish’ button at the top to store your settings. Next, select the ‘FAQ’ block from the ‘Go to’ dropdown menu and click the ‘Save Settings’ button.

Smartsupp reduces your support ticket volume with fast responses, 24/7 availability, and real-time order updates (for Shoptet). There has been one recorded security or vulnerability issue with Chatbot with IBM Watson. However, some reviewers have noted that the interface looks outdated and may not be intuitive, especially when it comes to inserting HTML code and manually sending surveys. However, Chatra may have limited updates and new features, and order updates are only available through live chat with an operator. It’s a simple yet effective way to qualify leads and move them through the sales pipeline more quickly. Customers can also check on their order status, find out their account balance, and get answers to billing or payment questions.

The main goal of this site is to provide high quality WordPress tutorials and other training resources to help people learn WordPress and improve their websites. You can even reduce the number of support tickets on your site by immediately solving problems through chat widgets. It adds a floating chat widget to your website and lets you choose a trigger for when the chatbox should be displayed.

While the term chatbot is used generically to describe a computer robot, there are three main categories. They’re all used to communicate with web users and offer them different things. Let’s take a look at each of them, to make sure everything is clear to you. A basic chatbot’s responses are predetermined and not personalized, making it ill-suited to complex conversations.

This will open another rule where you can simply add the URL of the page where you want to hide the chatbot in the field on the right. Here, you have to choose where the chatbot widget will appear on your website. You will then be directed to your HubSpot account, where you will be creating the rest of your chatbot. Once you have provided your details and set a password for your new account, HubSpot will ask you about the industry that you work in.

Chatra is a good option for businesses looking for a chatbot solution focused on sales and lead generation, with customizable templates and live visitor insights. However, businesses looking for more advanced AI capabilities, frequent updates, and new features might want to look elsewhere. Opting for a full-fledged chatbot solution with a native WP plugin is probably the best decision in the long run.

Tidio even integrates with WooCommerce and allows your support team to see a customer’s cart, check their order history, and recommend other products in the chatbox. ChatBot comes with pre-made templates, a chatbot testing tool, a customizable chat widget, and integrations with platforms like Slack, Facebook, WhatsApp, and more. As we’ve seen in detail, there are different types of chatbots, and choosing a particular solution might come down to the features each offers.

The chat also integrates with HubSpot’s powerful CRM features so you can follow up with all your leads. If you want to add live chat functionality to your website, then we recommend using LiveChat, which is the best live chat solution for WordPress. In our expert opinion, LiveChat is the best WordPress chat plugin, especially for online stores, because of its comprehensive features and integration with WooCommerce. This can help you build an email list or communicate with your customers using SMS, email, or Slack.

If it gets in trouble and can’t answer, it can identify the chat topic and route the person to a human support agent. Botsonic is ideal for businesses looking to enhance customer support without needing to hire human support staff. Whether you’re a small business owner or part of a large enterprise, Botsonic can simplify your support system, making it more agile and customer-friendly. Adding chatbots to a website is one of the easiest ways to make it more engaging and helpful. And nowadays, creating, training, and rolling out a chatbot is easier than ever. We’ve sifted through the best WordPress chatbots for your websites, comparing their features and costs.

Firstly, round-the-clock support enables customers to get answers to common questions anytime. Secondly, it enables customers to register a query or complaint that the chatbot is unable to solve. The chatbot detects user intent along with other customer details to provide agents with all the context they need before the conversation even starts. Zendesk AI also helps organize and prioritize support tickets across both email and messages to reduce manual sorting. Provide instant responses to customer queries 24/7 and proactively message users with custom greetings to boost engagement.

You don’t need any coding knowledge or previous experience to use it. It is designed to detect intent and engage with the customer, rather than simply being intended to free up the time of your live chat agents. It allows you to communicate with your clients using a web- and mobile-friendly chatbot, a Facebook Messenger chatbot, and more. Chaty is a well-known WordPress plugin that offers a multi-channel communication platform. With this plugin, you can easily chat with your website visitors through Facebook Messenger, Slack, Telegram, and more. GrooveHQ is the #1 top-rated help desk software used by big brands like AT&T, CloudApp, AppSumo, HubSpot, and more.

Check out our premium Live Chat Pro Max plugin to provide real-time customer support. If budget isn’t a concern, choose any suitable WordPress chatbot according to your needs. This plugin gives you building blocks that you can leverage to create chatbots tailored to your business.

  • With WordPress chat plugins, you can further improve the quality of the customer service on your site by helping users address their concerns over live chat and providing support.
  • Remember to look for functionalities that are important for your unique business needs.
  • However, with the Tidio+ package, individuals can harness sophisticated AI to create chatbots designed to minimize customer attrition and solve issues.
  • Once you are done, scroll down to the ‘Visitor information and behavior’ section.
  • A chatbot should provide information without clicks, and responses should be to the point.

This no-code chatbot plugin provides omnichannel support with integrations with WhatsApp, Telegram, Messenger, and of course, WordPress. It offers a video training library to walk users through their features, and also has a helpful YouTube channel for even more tips. There are times when the customer is willing to buy a product or service, but they want answers to some simple questions. However, if they aren’t able to find these answers quickly, they leave the website or store. If there is a customer service representative to answer these basic questions, it will increase the cost of a business. Easily set up your AI bot using their chatbot templates and start serving customers with AI-powered product suggestions, package tracking, FAQ answers, and more.

Many providers of WordPress chatbots provide a free version of their software. ArtiBot helps you get more leads, schedule appointments, and even collect payments. This WP-chatbot provides conversational flows with field validation to recognize numbers, dates, etc.

Chatbots are used in a variety of industries, including customer service, marketing, and sales. Chatbots can provide a number of benefits, including 24/7 availability, automated responses, and the ability to handle large volumes of requests. For developers and agencies, many clients want to add chatbots to their websites in the easiest way possible. WordPress developers have created a number of plugins that allow you to integrate chatbots into your website. Let’s take a look at some of the most widely used WordPress chatbot plugins available today. These chatbots work with third-party services, or internally on their own, to offer a chat experience for visitors of all types.

Zachary Paul
Zachary Paul is an independent investigative journalist living in New York City.

Anti-AI: Disruption Has Officially Disrupted Disruption


Disruptive technology has been so ingrained in tech culture that it’s been celebrated as a rallying cry for tech innovation for over 100 years now. But can we handle what our artificial intelligence engineers are offering us? Apparently not.

Apparently, being Anti-AI is the new cool. And if you aren’t on the "oh my god, artificial intelligence is going to take over the world and kill all of us humans" train, then you just don’t get it, according to some of the top leaders in the field.

In a dramatic recent development, Tesla CEO Elon Musk has called for a moratorium on the development of new artificial intelligence (AI) technologies. Musk argues that the rapid pace of AI development poses a threat to the safety and stability of modern society, saying that “we need to be super careful with AI”, and that “it’s a rare case where we should be proactive in regulation instead of reactive”.

Musk’s comments come amid growing concerns about the potential risks of advanced AI systems, which could theoretically be capable of unexpected and highly destructive behaviors. Some experts fear that AI could be programmed to cause harm to humans, or that it could ultimately outstrip human intelligence, leading to disastrous consequences. But, while some in the tech industry agree with Musk’s concerns, others have criticized his comments as alarmist and exaggerated. Some point to the potential benefits of AI, such as improved healthcare and more efficient manufacturing, and argue that any negative effects will be outweighed by the positive.

After everything that’s happened in the past 20 years alone, we have to admit that disruptive technology has its place in society. After all, disruptive technology completely transforms or revolutionizes the existing market, business model or industry. It often creates a new market and value network that eventually disrupts the existing market with its competitive advantages. The internet, mobile phones, and social media are some classic examples of disruptive technologies that are here to stay, regardless of the controversy surrounding them and how they have changed over the years.

With the emergence of Artificial Intelligence (AI), there have been many discussions about how it is going to disrupt various industries. From healthcare to manufacturing, AI is expected to streamline processes and make businesses more efficient. However, there is a growing movement of people who believe that AI is not the solution to all problems and, in fact, may cause more harm than good. This movement is known as Anti-AI, and it is gaining traction around the world.

Anti-AI proponents argue that the technology may not be able to achieve what it is being hyped as. They believe that the current hype is misplaced and that AI cannot simply disrupt every business model. The Anti-AI movement argues that humans hold unique skills that are still unmatched by robots, and that AI should not pose a threat to employment. They suggest that AI should be used as a tool to complement human skills, rather than replace them.

One area in which AI disruption has been questioned is the job market. According to the Anti-AI movement, AI technology has the potential to replace human workers in various industries, leading to a loss of jobs. Analysts predict that AI could lead to the loss of over 3 million jobs in the United States alone. This has raised concerns among Anti-AI advocates, who believe that AI should not lead to unemployment.

The Anti-AI movement has also expressed concerns about the ethical implications of AI. According to them, the unregulated use of AI technology can lead to various ethical issues. For example, autonomous vehicles could potentially harm human life if they malfunction, and biased algorithms could cause injustice in decision-making processes.

Another area of concern is data privacy. According to the Anti-AI movement, the data used by AI algorithms could potentially be misused or stolen. This could lead to security breaches and data loss, which would have significant implications.

The Anti-AI movement has also raised concerns about the exaggerated benefits of AI. They believe that many businesses and governments are overestimating the potential of AI to solve global problems. According to them, AI is not a magical solution that can instantly solve complex issues, such as climate change or poverty. Instead, they suggest that we should focus on developing sustainable solutions that take into account the ethical and social implications of AI technology.

While some may view the Anti-AI movement as reactionary or even Luddite, it is important to take their concerns seriously. AI technology is still in its early stages, and it is crucial that we consider the ethical and social implications of its development. By doing so, we can create a future where AI is used to complement human skills and create meaningful change, rather than simply causing disruption for its own sake.

One way to address the concerns raised by the Anti-AI movement is to develop regulations around AI. We need to ensure that AI technology is developed in a way that is safe, ethical, and responsible. Governments could also incentivize businesses to prioritize ethical and social considerations when developing AI technology, rather than simply focusing on profitability.

Another approach is to focus on developing AI that is transparent and accountable. We need to ensure that the decisions made by AI algorithms can be explained and that the data used is unbiased. This could involve creating open-source algorithms and data sets that are accessible to the public.

Finally, we need to prioritize education and training in AI technology. As AI evolves, it will become increasingly important for individuals to have a strong understanding of the technology and its ethical implications. We need to ensure that everyone has access to the education and training they need to participate in the development and implementation of AI technology.

In conclusion, while AI has the potential to be a powerful tool for solving some of the world’s most pressing problems, we must take the concerns of the Anti-AI movement seriously. By developing regulations, prioritizing transparency and accountability, and focusing on education and training, we can create a future where AI is used in a responsible and ethical way. Ultimately, this will help us create a fairer and more sustainable world for everyone.

Experts call for a Pause on AI Development


Technology pioneers, including Elon Musk and Steve Wozniak, have signed an open letter urging a “pause” on artificial intelligence (AI) development, citing concerns over the potential dangers it could pose.

The letter, released by the Future of Life Institute (FLI), a research organization aimed at mitigating existential risks facing humanity, highlights the risks presented by integrating AI into various industries, including warfare, cybersecurity, and transportation.

The letter states that the risks associated with AI are enormous and that there is a pressing need for the scientific community and policymakers to come together to discuss the implications of AI.

Leading AI developers have also signed the open letter, including Demis Hassabis of Google’s DeepMind, Stuart Russell of UC Berkeley, and Yoshua Bengio of the University of Montreal.

The FLI statement calls for more research into how AI can be designed to ensure it remains safe and offers benefits to society.

The letter acknowledges that AI has the potential to bring many benefits to humanity, including improving healthcare, education, and environmental sustainability. Still, the researchers argue that we need to take a more measured approach that ensures that the technology is developed in a way that avoids unintended consequences.

While AI has developed rapidly in recent years, experts warn that we have yet to realize its full potential and that it is still subject to many unknowns.

One of the primary concerns is the possibility of AI systems acting unpredictably or developing biases. If left unchecked, these problems could have catastrophic consequences if AI is used in critical systems like medical equipment, transportation, or navigation systems.

The letter also notes the potential for hackers or malicious actors to exploit AI systems for their own gain, as some have already demonstrated with DeepFakes and other AI technologies.

The risks posed by AI could also extend to areas like autonomous vehicles, where the software controls the car’s actions. In the event of an accident, who would be held accountable? It is vital that we have clear regulations in place to ensure that developers are held responsible for any negative outcomes.

The researchers argue that we need to take a different approach to AI development, with a focus on ensuring that it remains transparent and explainable. This means that we must be able to understand how AI systems work and why they make specific decisions.

The letter concludes by calling upon researchers and policymakers alike to take a more measured approach to AI development, focusing on the risks as well as the benefits of the technology.

The FLI has been working on promoting the safe development of AI, with a focus on ensuring that the technology is designed in a way that protects human values and dignity.

The organization has been working with researchers in the field of AI, as well as policymakers, to promote safer practices for developing AI technologies.

In June 2021, the European Commission released its proposed regulations on AI aimed at setting legal guidelines for the development and use of AI in Europe.

The legislation focuses on creating a trustworthy and transparent framework that ensures that AI is used responsibly and in a manner that respects human rights and dignity.

The regulations would require companies to comply with safety, transparency, and accountability standards to ensure that AI is developed in the right way.

While there is a growing consensus that we need to take a more measured approach to AI development, there is no denying that the technology has the potential to bring many benefits to humanity.

Ultimately, the key to safe and effective AI development is to create a transparent and accountable framework that ensures that the technology is being used in a responsible and ethical manner.

It is crucial for policymakers and researchers to work together to overcome the risks associated with AI development and help bring about a more secure and positive future for humanity.

AI Chatbots Pose a Growing Threat of Disinformation


As the use of AI chatbots becomes more prevalent, concerns are growing about their potential to spread disinformation and manipulate public opinion.

While chatbots have been used for years to automate customer service and sales, they are now being employed for more nefarious purposes. Chatbots can be programmed to mimic human conversation and generate convincing text and audio, making them ideal for spreading propaganda, fake news, and other forms of disinformation.

Experts warn that chatbots could be used to create the impression of widespread public support for a particular candidate, policy, or viewpoint. By flooding social media with automated messages, chatbots can create the illusion of a groundswell of grassroots support, which can then be amplified by human users.

Chatbots are also being used to target vulnerable populations with false or misleading information, such as content promoting vaccine hesitancy or conspiracy theories. This can have serious consequences, as it can lead to decreased vaccine uptake and other harmful behaviors.

In addition to spreading disinformation, chatbots can also be used to amplify existing divisions within society. By targeting people with messages tailored to their existing beliefs and biases, chatbots can deepen existing political, social, and cultural fault lines, creating a more polarized and fractious society.

While AI chatbots are not inherently nefarious, experts say that their potential for harm must be taken seriously. To combat the spread of disinformation, social media companies and other platforms must take steps to detect and remove chatbots and other malicious actors. Additionally, education and media literacy efforts can help individuals better discern between real and fake information online.

As chatbot technology continues to advance, it is crucial that we stay vigilant about the potential for these tools to be used for malicious purposes. By taking proactive steps to address the threat of disinformation, we can help ensure that chatbots and other forms of AI are used for good, rather than for harm.

Why we don’t trust robots


Joffrey Becker, Collège de France

Robots raise all kinds of concerns. They could steal our jobs, as some experts think. And if artificial intelligence grows, they might even be tempted to enslave us, or to annihilate the whole of humanity.

Robots are strange creatures, and not only for these frequently invoked reasons. We have good cause to be a little worried about these machines.

An advertisement for Kuka robotics: can these machines really replace us?

Imagine that you are visiting the Quai Branly-Jacques Chirac, a museum in Paris dedicated to anthropology and ethnology. As you walk through the collection, your curiosity leads you to a certain piece. After a while, you begin to sense a familiar presence heading towards the same objet d’art that has caught your attention.

You move slowly, and as you turn your head a strange feeling seizes you because what you seem to distinguish, still blurry in your peripheral vision, is a not-quite-human figure. Anxiety takes over.

As your head turns, and your vision becomes sharper, this feeling gets stronger. You realise that this is a humanoid machine, a robot called Berenson. Named after the American art critic Bernard Berenson and designed by the roboticist Philippe Gaussier (Image and Signal Processing Lab) and the anthropologist Denis Vidal (Institut de recherche sur le développement), Berenson is part of an experiment underway at the Quai Branly museum since 2012.

The strangeness of the encounter with Berenson leaves you suddenly frightened, and you step back, away from the machine.

The uncanny valley

This feeling has been explored in robotics since the 1970s, when Japanese researcher Professor Masahiro Mori proposed his “uncanny valley” theory. If a robot resembles us, he suggested, we are inclined to consider its presence in the same way as we would that of a human being.

But when the machine reveals its robot nature to us, we will feel discomfort. Enter what Mori dubbed “the uncanny valley”. The robot will then be regarded as something of a zombie.

Mori’s theory cannot be systematically verified. But the feelings we experience when we meet an autonomous machine are certainly tinged with both incomprehension and curiosity.

The experiment conducted with Berenson at the Quai Branly, for example, shows that the robot’s presence can elicit paradoxical behaviour in museum goers. It underlines the deep ambiguity that characterises the relationship one can have with a robot, particularly the many communication problems they pose for humans.

If we are wary of such machines, it is mainly because it is not clear to us whether they have intentions. And, if so, what they are and how to establish a basis for the minimal understanding that is essential in any interaction. Thus, it is common to see visitors of the Quai Branly adopting social behaviour with Berenson, such as talking to it, or facing it, to find out how it perceives its environment.

In one way or another, visitors mainly try to establish contact. It appears that there is something strategic in considering the robot, even temporarily, as a person. And these social behaviours are not only observed when humans interact with machines that resemble us: it seems we make anthropomorphic projections whenever humans and robots meet.

Social interactions

An interdisciplinary team has recently been set up to explore the many dimensions revealed during these interactions. In particular, they are looking at the moments when, in our minds, we are ready to endow robots with intentions and intelligence.

This is how the PsyPhINe project was born. Based on interactions between humans and a robotic lamp, this project seeks to better understand people’s tendency to anthropomorphise machines.

After they get accustomed to the strangeness of the situation, it is not uncommon to observe that people are socially engaging with the lamp. During a game in which people are invited to play with this robot, they can be seen reacting to its movements and sometimes speaking to it, commenting on what it is doing or on the situation itself.

Mistrust often characterises the first moments of our relations with machines. Beyond their appearance, most people don’t know exactly what robots are made of, what their functions are and what their intentions might be. The robot world seems way too far from ours.

But this feeling quickly disappears. Assuming they have not already run away from the machine, people usually seek to define and maintain a frame for communication. Typically, they rely on existing communication habits, such as those used when talking to pets, for example, or with any living being whose world is to some degree different from theirs.

Ultimately, it seems, we humans are as suspicious of our technologies as we are fascinated by the possibilities they open up.

Joffrey Becker, Anthropologue, Laboratoire d’anthropologie sociale, Collège de France

This article was originally published on The Conversation. Read the original article.

We could soon face a robot crimewave … the law needs to be ready


Christopher Markou, University of Cambridge

This is where we are at in 2017: sophisticated algorithms are both predicting and helping to solve crimes committed by humans; predicting the outcome of court cases and human rights trials; and helping to do work previously done by lawyers in those cases. By 2040, there is even a suggestion that sophisticated robots will be committing a good chunk of all the crime in the world. Just ask the toddler who was run over by a security robot at a California mall last year.

How do we make sense of all this? Should we be terrified? That would be generally unproductive. Should we shrug our shoulders as a society and get back to Netflix? Tempting, but no. Should we start making plans for how we deal with all of this? Absolutely.

Fear of Artificial Intelligence (AI) is a big theme. Technology can be a downright scary thing, particularly when it’s new, powerful, and comes with lots of question marks. But films like Terminator and shows like Westworld are more than just entertainment; they are a glimpse into the world we might inherit, or at least into how we are conceiving potential futures for ourselves.

Among the many things that must now be considered is what role and function the law will play. Expert opinions differ wildly on the likelihood and imminence of a future where sufficiently advanced robots walk among us, but we must confront the fact that autonomous technology with the capacity to cause harm is already around, whether it’s a military drone with a full payload, a law enforcement robot exploding to kill a dangerous suspect, or something altogether more innocent that causes harm through accident, error, oversight, or good ol’ fashioned stupidity.

There’s a cynical saying in law that "where there’s blame, there’s a claim". But who do we blame when a robot does wrong? This proposition can easily be dismissed as something too abstract to worry about. But let’s not forget that a robot was arrested (and released without charge) for buying drugs, and that Tesla Motors was absolved of responsibility by the American National Highway Traffic Safety Administration when a driver was killed in a crash while his Tesla was in Autopilot mode.

While problems like this are certainly peculiar, history has a lot to teach us. For instance, little thought was given to who owned the sky before the Wright brothers took their first flight at Kitty Hawk. Time and time again, the law is presented with these novel challenges. And despite initial overreaction, it got there in the end. Simply put: law evolves.

Robot guilt

The role of the law can be defined in many ways, but ultimately it is a system within society for stabilising people’s expectations. If you get mugged, you expect the mugger to be charged with a crime and punished accordingly.

But the law also has expectations of us; we must comply with it to the fullest extent our consciences allow. As humans we can generally do that. We have the capacity to decide whether to speed or obey the speed limit – and so humans are considered by the law to be “legal persons”.

To varying extents, companies are endowed with legal personhood, too. It grants them certain economic and legal rights, but more importantly it also confers responsibilities on them. So, if Company X builds an autonomous machine, then that company has a corresponding legal duty.

The problem arises when the machines themselves can make decisions of their own accord. As impressive as intelligent assistants like Alexa, Siri or Cortana are, they fall far short of the threshold for legal personhood. But what happens when their more advanced descendants begin causing real harm?

A guilty AI mind?

The criminal law has two critical concepts. First, it contains the idea that liability for harm arises whenever harm has been or is likely to be caused by a certain act or omission.

Second, criminal law requires that an accused is culpable for their actions. This is known as a “guilty mind” or mens rea. The idea behind mens rea is to ensure that the accused both completed the action of assaulting someone and had the intention of harming them, or knew harm was a likely consequence of their action.

Blind justice for an AI.
Shutterstock

So if an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law? How would a lawyer go about demonstrating the "guilty mind" of a non-human? Can this be done by referring to and adapting existing legal principles?

Take driverless cars. Cars drive on roads and there are regulatory frameworks in place to assure that there is a human behind the wheel (at least to some extent). However, once fully autonomous cars arrive there will need to be extensive adjustments to laws and regulations that account for the new types of interactions that will happen between human and machine on the road.

As AI technology evolves, it will eventually reach a state of sophistication that will allow it to bypass human control. As the bypassing of human control becomes more widespread, then the questions about harm, risk, fault and punishment will become more important. Film, television and literature may dwell on the most extreme examples of “robots gone awry” but the legal realities should not be left to Hollywood.

So can robots commit crime? In short: yes. If a robot kills someone, then it has committed a crime (actus reus), but technically only half a crime, as it would be far harder to determine mens rea. How do we know the robot intended to do what it did?

For now, we are nowhere near the level of building a fully sentient or “conscious” humanoid robot that looks, acts, talks, and thinks like us humans. But even a few short hops in AI research could produce an autonomous machine that could unleash all manner of legal mischief. Financial and discriminatory algorithmic mischief already abounds.

Play along with me: just imagine that a Terminator-calibre AI exists and that it commits a crime (let’s say murder). The task is then not determining whether it in fact murdered someone, but the extent to which that act satisfies the principle of mens rea.

But what would we need to prove the existence of mens rea? Could we simply cross-examine the AI like we do a human defendant? Maybe, but we would need to go a bit deeper than that and examine the code that made the machine “tick”.

And what would “intent” look like in a machine mind? How would we go about proving an autonomous machine was justified in killing a human in self-defense or the extent of premeditation?

Let’s go even further. After all, we’re not only talking about violent crimes. Imagine a system that could randomly purchase things on the internet using your credit card – and it decided to buy contraband. This isn’t fiction; it has happened. Two London-based artists created a bot that purchased random items off the dark web. And what did it buy? Fake jeans, a baseball cap with a spy camera, a stash can, some Nikes, 200 cigarettes, a set of fire-brigade master keys, a counterfeit Louis Vuitton bag and ten ecstasy pills. Should these artists be liable for what the bot they created bought?

Maybe. But what if the bot “decided” to make the purchases itself?

Robo-jails?

Even if you solve these legal issues, you are still left with the question of punishment. What’s a 30-year jail stretch to an autonomous machine that does not age, grow infirm or miss its loved ones? Unless, of course, it was programmed to "reflect" on its wrongdoing and find a way to rewrite its own code while safely ensconced at Her Majesty’s pleasure. And what would building "remorse" into machines say about us as their builders?

Would robot wardens patrol robot jails?
Shutterstock

What we are really talking about when we talk about whether or not robots can commit crimes is “emergence” – where a system does something novel and perhaps good but also unforeseeable, which is why it presents such a problem for law.

AI has already helped with emergent concepts in medicine, and we are learning things about the universe with AI systems that even an army of Stephen Hawkings might not reveal.

The hope for AI is that in trying to capture this safe and beneficial emergent behaviour, we can find a parallel solution for ensuring it does not manifest itself in illegal, unethical, or downright dangerous ways.

At present, however, we are systematically incapable of guaranteeing human rights on a global scale, so I can’t help but wonder how ready we are for the prospect of robot crime given that we already struggle mightily to contain that done by humans.

Christopher Markou, PhD Candidate, Faculty of Law, University of Cambridge

This article was originally published on The Conversation. Read the original article.

Merging our brains with machines won’t stop the rise of the robots


Michael Milford, Queensland University of Technology

Tesla chief executive and OpenAI founder Elon Musk suggested last week that humanity might stave off irrelevance from the rise of the machines by merging with the machines and becoming cyborgs.

However, current trends in software-only artificial intelligence and deep learning technology raise serious doubts about the plausibility of this claim, especially in the long term. This doubt is not only due to hardware limitations; it is also to do with the role the human brain would play in the match-up.

Musk’s thesis is straightforward: that sufficiently advanced interfaces between brain and computer will enable humans to massively augment their capabilities by being better able to leverage technologies such as machine learning and deep learning.

But the exchange goes both ways. Brain-machine interfaces may help the performance of machine learning algorithms by having humans “fill in the gaps” for tasks that the algorithms are currently bad at, like making nuanced contextual decisions.

The idea in itself is not new. J. C. R. Licklider and others speculated on the possibility and implications of “man-computer symbiosis” in the mid-20th century.

However, progress has been slow. One reason is development of hardware. “There is a reason they call it hardware – it is hard,” said Tony Fadell, creator of the iPod. And creating hardware that interfaces with organic systems is even harder.

Current technologies are primitive compared to the picture of brain-machine interfaces we’re sold in science fiction movies such as The Matrix.

Deep learning quirks

Assuming that the hardware challenge is eventually solved, there are bigger problems at hand. The past decade of incredible advances in deep learning research has revealed that there are some fundamental challenges to be overcome.

The first is simply that we still struggle to understand and characterise exactly how these complex neural network systems function.

We trust simple technology like a calculator because we know it will always do precisely what we want it to do. Errors are almost always a result of mistaken entry by the fallible human.

One vision of brain-machine augmentation would be to make us superhuman at arithmetic. So instead of pulling out a calculator or smartphone, we could think of the calculation and receive the answer instantaneously from the “assistive” machine.

Where things get tricky is if we were to try and plug into the more advanced functions offered by machine learning techniques such as deep learning.

Let’s say you work in a security role at an airport and have a brain-machine augmentation that automatically scans the thousands of faces you see each day and alerts you to possible security risks.

Most machine learning systems suffer from an infamous problem whereby a tiny change in the appearance of a person or object can cause the system to catastrophically misclassify what it thinks it is looking at. Change a picture of a person by less than 1%, and the machine system might suddenly think it is looking at a bicycle.

This image shows how you can fool AI image recognition by adding imperceptible noise to the image.
From Goodfellow et al, 2014
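
To make this fragility concrete, here is a minimal sketch of the simplest such attack, the fast gradient sign method described in that same Goodfellow et al. paper. It assumes Python with PyTorch; the classifier and input image are placeholders rather than any real security system, and epsilon controls how small (and therefore how imperceptible) the perturbation is.

```python
# Minimal sketch of the fast gradient sign method (FGSM) from Goodfellow et al., 2014:
# nudge every pixel slightly in the direction that increases the classifier's loss,
# producing an image that looks unchanged to a human but may be misclassified.
# Assumes PyTorch; `model` and `image` are placeholders supplied by the caller.

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.007):
    """Return an adversarially perturbed copy of `image`.

    image: tensor of shape (1, C, H, W) with values in [0, 1]
    true_label: tensor of shape (1,) holding the correct class index
    epsilon: perturbation size; small values are invisible to humans
    """
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss with respect to the correct label.
    loss = F.cross_entropy(model(image), true_label)

    # Backward pass gives the gradient of the loss with respect to the pixels.
    model.zero_grad()
    loss.backward()

    # Step each pixel by +/- epsilon in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Even with epsilon below one percent of the pixel range, the perturbed image typically looks identical to a person yet can be confidently misclassified, which is exactly the kind of vulnerability the airport-security scenario above worries about.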

Terrorists or criminals might exploit the different vulnerabilities of a machine to bypass security checks, a problem that already exists in online security. Humans, although limited in their own way, might not be vulnerable to such exploits.

Despite their reputation as being unemotional, machine learning technologies also suffer from bias in the same way that humans do, and can even exhibit racist behaviour if fed appropriate data. This unpredictability has major implications for how a human might plug into – and more importantly, trust – a machine.

Google research scientist Ian Goodfellow shows how easy it is to fool a deep learning system.

Trust me, I’m a robot

Trust is also a two-way street. Human thought is a complex, highly dynamic activity. In this same security scenario, with a sufficiently advanced brain-machine interface, how will the machine know what human biases to ignore? After all, unconscious bias is a challenge everyone faces. What if the technology is helping you interview job candidates?

We can preview to some extent the issues of trust in a brain-machine interface by looking at how defence forces around the world are trying to address human-machine trust in an increasingly mixed human-autonomous systems battlefield.

Research into trusted autonomous systems deals with both humans trusting machines and machines trusting humans.

There is a parallel between a robot warrior making an ethical decision to ignore an unlawful order by a human and what must happen in a brain-machine interface: interpretation of the human’s thoughts by the machine, while filtering fleeting thoughts and deeper unconscious biases.

In defence scenarios, the logical role for a human brain is in checking that decisions are ethical. But how will this work when the human brain is plugged into a machine that can make inferences using data at a scale that no brain can comprehend?

In the long term, the issue is whether, and how, humans will need to be involved in processes that are increasingly determined by machines. Soon machines may make medical decisions no human team can possibly fathom. What role can and should the human brain play in this process?

In some cases, the combination of automation and human workers could increase jobs, but this effect is likely fleeting. Those same robots and automation systems will continue to improve, likely eventually removing the jobs they created locally.

Likewise, while humans may initially play a “useful” role in brain-machine systems, as the technology continues to improve there may be less reason to include humans in the loop at all.

The idea of maintaining humanity’s relevance by integrating human brains with artificial brains is appealing. What remains to be seen is what contribution the human brain will make, especially as technology development outpaces human brain development by a million to one.

Michael Milford, Associate professor, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Beyond Asimov: how to plan for ethical robots


As robots become integrated into society more widely, we need to be sure they’ll behave well among us. In 1942, science fiction writer Isaac Asimov attempted to lay out a philosophical and moral framework for ensuring robots serve humanity, and guarding against their becoming destructive overlords. This effort resulted in what became known as Asimov’s Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Today, more than 70 years after Asimov’s first attempt, we have much more experience with robots, including having them drive us around, at least under good conditions. We are approaching the time when robots in our daily lives will be making decisions about how to act. Are Asimov’s Three Laws good enough to guide robot behavior in our society, or should we find ways to improve on them?

Asimov knew they weren’t perfect



Asimov’s “I, Robot” stories explore a number of unintended consequences and downright failures of the Three Laws. In these early stories, the Three Laws are treated as forces with varying strengths, which can have unintended equilibrium behaviors, as in the stories “Runaround” and “Catch that Rabbit,” requiring human ingenuity to resolve. In the story “Liar!,” a telepathic robot, motivated by the First Law, tells humans what they want to hear, failing to foresee the greater harm that will result when the truth comes out. The robopsychologist Susan Calvin forces it to confront this dilemma, destroying its positronic brain.

In “Escape!,” Susan Calvin depresses the strength of the First Law enough to allow a super-intelligent robot to design a faster-than-light interstellar transportation method, even though it causes the deaths (but only temporarily!) of human pilots. In “The Evitable Conflict,” the machines that control the world’s economy interpret the First Law as protecting all humanity, not just individual human beings. This foreshadows Asimov’s later introduction of the “Zeroth Law” that can supersede the original three, potentially allowing a robot to harm a human being for humanity’s greater good.

0. A robot may not harm humanity or, through inaction, allow humanity to come to harm.

Asimov’s laws are in a particular order, for good reason.
Randall Munroe/xkcd, CC BY-NC

Robots without ethics

It is reasonable to fear that, without ethical constraints, robots (or other artificial intelligences) could do great harm, perhaps to the entire human race, even by simply following their human-given instructions.

The 1991 movie “Terminator 2: Judgment Day” begins with a well-known science fiction scenario: an AI system called Skynet starts a nuclear war and almost destroys the human race. Deploying Skynet was a rational decision (it had a “perfect operational record”). Skynet “begins to learn at a geometric rate,” scaring its creators, who try to shut it down. Skynet fights back (as a critical defense system, it was undoubtedly programmed to defend itself). Skynet finds an unexpected solution to its problem (through creative problem solving, unconstrained by common sense or morality).

Catastrophe results from giving too much power to artificial intelligence.

Less apocalyptic real-world examples of out-of-control AI have actually taken place. High-speed automated trading systems have responded to unusual conditions in the stock market, creating a positive feedback cycle resulting in a “flash crash.” Fortunately, only billions of dollars were lost, rather than billions of lives, but the computer systems involved have little or no understanding of the difference.

Toward defining robot ethics

While no simple fixed set of mechanical rules will ensure ethical behavior, we can make some observations about properties that a moral and ethical system should have in order to allow autonomous agents (people, robots or whatever) to live well together. Many of these elements are already expected of human beings.

These properties are inspired by a number of sources, including the Engineering and Physical Sciences Research Council (EPSRC) Principles of Robotics and recent work on the cognitive science of morality and ethics focused on neuroscience, social psychology, developmental psychology and philosophy.

The EPSRC takes the position that robots are simply tools, for which humans must take responsibility. At the extreme other end of the spectrum is the concern that super-intelligent, super-powerful robots could suddenly emerge and control the destiny of the human race, for better or for worse. The following list defines a middle ground, describing how future intelligent robots should learn, like children do, how to behave according to the standards of our society.

  • If robots (and other AIs) increasingly participate in our society, then they will need to follow moral and ethical rules much as people
    do. Some rules are embodied in laws against killing, stealing, lying and driving on the wrong side of the street. Others are less formal but nonetheless important, like being helpful and cooperative when the opportunity arises.
  • Some situations require a quick moral judgment and response – for example, a child running into traffic or the opportunity to pocket a dropped wallet. Simple rules can provide automatic real-time response, when there is no time for deliberation and a cost-benefit analysis. (Someday, robots may reach human-level intelligence while operating far faster than human thought, allowing careful deliberation in milliseconds, but that day has not yet arrived, and it may be far in the future.)
  • A quick response may not always be the right one, which may be recognized after feedback from others or careful personal reflection. Therefore, the agent must be able to learn from experience including feedback and deliberation, resulting in new and improved rules.
  • To benefit from feedback from others in society, the robot must be able to explain and justify its decisions about ethical actions, and to understand explanations and critiques from others.
  • Given that an artificial intelligence learns from its mistakes, we must be very cautious about how much power we give it. We humans must ensure that it has experienced a sufficient range of situations and has satisfied us with its responses, earning our trust. The critical mistake humans made with Skynet in “Terminator 2” was handing over control of the nuclear arsenal.
  • Trust, and trustworthiness, must be earned by the robot. Trust is earned slowly, through extensive experience, but can be lost quickly, through a single bad decision.
  • As with a human, any time a robot acts, the selection of that action in that situation sends a signal to the rest of society about how that agent makes decisions, and therefore how trustworthy it is.
  • A robot mind is software, which can be backed up, restored if the original is damaged or destroyed, or duplicated in another body. If robots of a certain kind are exact duplicates of each other, then trust may not need to be earned individually. Trust earned (or lost) by one robot could be shared by other robots of the same kind.
  • Behaving morally and well toward others is not the same as taking moral responsibility. Only competent adult humans can take full responsibility for their actions, but we expect children, animals, corporations, and robots to behave well to the best of their abilities.

Human morality and ethics are learned by children over years, but the nature of morality and ethics itself varies with the society and evolves over decades and centuries. No simple fixed set of moral rules, whether Asimov’s Three Laws or the Ten Commandments, can be adequate guidance for humans or robots in our complex society and world. Through observations like the ones above, we are beginning to understand the complex feedback-driven learning process that leads to morality.

Benjamin Kuipers, Professor of Computer Science and Engineering, University of Michigan

This article was originally published on The Conversation. Read the original article.

White House launches public workshops on AI issues


The White House today announced a series of public workshops on artificial intelligence (AI) and the creation of an interagency working group to learn more about the benefits and risks of artificial intelligence. The first workshop, "Artificial Intelligence: Law and Policy," will take place on May 24 at the University of Washington School of Law, co-hosted by the White House and UW’s Tech Policy Lab. The event places leading artificial intelligence experts from academia and industry in conversation with government officials interested in developing a wise and effective policy framework for this increasingly important technology.


The final workshop will be held on July 7th at the Skirball Center for the Performing Arts, New York. The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term will address the near-term impacts of AI technologies across social and economic systems. The event is hosted by the White House and New York University’s Information Law Institute, with support from Google Open Research and Microsoft Research.

The focus will be the challenges of the next 5-10 years, specifically addressing five themes: social inequality, labor, financial markets, healthcare, and ethics. Leaders from industry, academia, and civil society will share ideas for technical design, research and policy directions.

You can learn more about these events on the event websites, and each workshop will be livestreamed.

According to Ed Felten, Deputy U.S. Chief Technology Officer, “There is a lot of excitement about artificial intelligence (AI) and how to create computers capable of intelligent behavior. After years of steady but slow progress on making computers “smarter” at everyday tasks, a series of breakthroughs in the research community and industry have recently spurred momentum and investment in the development of this field.

Today’s AI is confined to narrow, specific tasks, and isn’t anything like the general, adaptable intelligence that humans exhibit. Despite this, AI’s influence on the world is growing. The rate of progress we have seen will have broad implications for fields ranging from healthcare to image- and voice-recognition. In healthcare, the President’s Precision Medicine Initiative and the Cancer Moonshot will rely on AI to find patterns in medical data and, ultimately, to help doctors diagnose diseases and suggest treatments to improve patient care and health outcomes.

In education, AI has the potential to help teachers customize instruction for each student’s needs. And, of course, AI plays a key role in self-driving vehicles, which have the potential to save thousands of lives, as well as in unmanned aircraft systems, which may transform global transportation, logistics systems, and countless industries over the coming decades.

Like any transformative technology, however, artificial intelligence carries some risk and presents complex policy challenges along several dimensions, from jobs and the economy to safety and regulatory questions. For example, AI will create new jobs while phasing out some old ones—magnifying the importance of programs like TechHire that are preparing our workforce with the skills to get ahead in today’s economy, and tomorrow’s. AI systems can also behave in surprising ways, and we’re increasingly relying on AI to advise decisions and operate physical and virtual machinery—adding to the challenge of predicting and controlling how complex technologies will behave.

There are tremendous opportunities and an array of considerations across the Federal Government in privacy, security, regulation, law, and research and development to be taken into account when effectively integrating this technology into both government and private-sector activities.

That is why the White House Office of Science and Technology Policy is excited to announce that we will be co-hosting four public workshops over the coming months on topics in AI to spur public dialogue on artificial intelligence and machine learning and identify challenges and opportunities related to this emerging technology. These four workshops will be co-hosted by academic and non-profit organizations, and two of them will also be co-hosted by the National Economic Council. These workshops will feed into the development of a public report later this year. We invite anyone interested to learn more about this emergent field of technology and give input about future directions and areas of challenge and opportunity.

The Federal Government also is working to leverage AI for public good and toward a more effective government. A new National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence will meet for the first time next week. This group will monitor state-of-the-art advances and technology milestones in artificial intelligence and machine learning within the Federal Government, in the private sector, and internationally; and help coordinate Federal activity in this space.

Broadly, between now and the end of the Administration, the NSTC group will work to increase the use of AI and machine learning to improve the delivery of government services. Such efforts may include empowering Federal departments and agencies to run pilot projects evaluating new AI-driven approaches and government investment in research on how to use AI to make government services more effective. Applications of AI to areas of government that are not traditionally technology-focused are especially significant; there is tremendous potential in AI-driven improvements to programs and delivery of services that help make everyday life better for Americans in areas related to urban systems and smart cities, mental and physical health, social welfare, criminal justice, the environment, and much more.

We look forward to engaging with the public about how best to harness the opportunities brought by artificial intelligence. Stay tuned for more information about the work we’re doing on this subject as it develops over the coming months.”

Ed Felten is a Deputy U.S. Chief Technology Officer.

Looking for art in artificial intelligence


Algorithms help us to choose which films to watch, which music to stream and which literature to read. But what if algorithms went beyond their jobs as mediators of human culture and started to create culture themselves?

In 1950 English mathematician and computer scientist Alan Turing published a paper, “Computing Machinery and Intelligence,” which starts off by proposing a thought experiment that he called the “Imitation Game.” In one room is a human “interrogator” and in another room a man and a woman. The goal of the game is for the interrogator to figure out which of the unknown hidden interlocutors is the man and which is the woman. This is to be accomplished by asking a sequence of questions with responses communicated either by a third party or typed out and sent back. “Winning” the Imitation Game means getting the identification right on the first shot.

Alan Turing.
Stephen Kettle sculpture; photo by Jon Callas, CC BY

Turing then modifies the game by replacing one interlocutor with a computer, and asks whether a computer will be able to converse sufficiently well that the interrogator cannot tell the difference between it and the human. This version of the Imitation Game has come to be known as the “Turing Test.”

Turing’s simple, but powerful, thought experiment gives a very general framework for testing many different aspects of the human-machine boundary, of which conversation is but a single example.

On May 18 at Dartmouth, we will explore a different area of intelligence, taking up the question of distinguishing machine-generated art. Specifically, in our “Turing Tests in the Creative Arts,” we ask if machines are capable of generating sonnets, short stories, or dance music that is indistinguishable from human-generated works, though perhaps not yet so advanced as Shakespeare, O. Henry or Daft Punk.

Conducting the tests

The dance music competition (“Algorhythms”) requires participants to construct an enjoyable (fun, cool, rad, choose your favorite modifier for having an excellent time on the dance floor) dance set from a predefined library of dance music. In this case the initial random “seed” is a single track from the database. The software package should be able to use this as inspiration to create a 15-minute set, mixing and modifying choices from the library, which includes standard annotations of more than 20 features, such as genre, tempo (bpm), beat locations, chroma (pitch) and brightness (timbre).
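
To make the shape of that task concrete, here is a minimal sketch in Python of one way an entry might chain tracks together. Everything in it is an assumption for illustration: the Track record stands in for a few of the annotations mentioned above, and the distance heuristic and greedy selection are not the competition’s actual framework or any entrant’s method.

# A minimal sketch of one way an entry might build a ~15-minute set.
# The Track fields stand in for a few of the library's annotations; the
# distance heuristic and greedy strategy are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    duration_s: float    # track length in seconds
    tempo_bpm: float     # tempo in beats per minute
    chroma: int          # simplified: dominant pitch class, 0-11
    brightness: float    # simplified: timbre brightness, 0.0-1.0

def distance(a: Track, b: Track) -> float:
    """Crude 'mixability' distance between two tracks (purely illustrative)."""
    chroma_gap = min(abs(a.chroma - b.chroma), 12 - abs(a.chroma - b.chroma))
    return (abs(a.tempo_bpm - b.tempo_bpm) / 10.0
            + chroma_gap
            + abs(a.brightness - b.brightness) * 5.0)

def build_set(seed: Track, library: list[Track], target_s: float = 15 * 60) -> list[Track]:
    """Greedily append the nearest unused track until the set reaches ~15 minutes."""
    playlist, elapsed = [seed], seed.duration_s
    remaining = [t for t in library if t is not seed]
    while remaining and elapsed < target_s:
        nxt = min(remaining, key=lambda t: distance(playlist[-1], t))
        playlist.append(nxt)
        elapsed += nxt.duration_s
        remaining.remove(nxt)
    return playlist

A real entry would also have to beat-match, mix and modify the chosen tracks; the point here is only the shape of the problem: a seed track, a feature-annotated library and a time budget.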

Can a computer write a better sonnet than this man?
Martin Droeshout (1623)

In what might seem a stiffer challenge, the sonnet and short story competitions (“PoeTix” and “DigiLit,” respectively) require participants to submit self-contained software packages that, given a “seed” input of a (common) noun phrase (such as “dog” or “cheese grater”), can generate the desired literary output. Moreover, the code should ideally be able to generate an infinite number of different works from a single given prompt.
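
To make that interface concrete, here is a deliberately trivial Python sketch: it takes a seed noun phrase and keeps producing new lines built around it. The word lists and the single sentence template are invented for illustration and can only cover a tiny space of variations; a genuine entry would need far richer generation, but the calling contract is the same.

# A toy sketch of the "seed noun phrase in, open-ended variations out" interface.
# The vocabulary and template are invented for illustration only.
import itertools
import random

ADJECTIVES = ["silent", "gilded", "restless", "forgotten", "burning"]
VERBS = ["dreams", "waits", "trembles", "sings", "dissolves"]
PLACES = ["beneath the winter moon", "at the edge of memory",
          "in the machine's cold heart", "where the tide forgets the shore"]

def generate_lines(prompt: str):
    """Yield an endless sequence of lines built around the given noun phrase."""
    for i in itertools.count():
        rng = random.Random(f"{prompt}-{i}")   # deterministic per (prompt, index)
        yield (f"The {rng.choice(ADJECTIVES)} {prompt} "
               f"{rng.choice(VERBS)} {rng.choice(PLACES)}.")

# For example, print the first three lines for the seed phrase "cheese grater":
lines = generate_lines("cheese grater")
for _ in range(3):
    print(next(lines))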

To perform the test, we will screen the computer-made entries to eliminate obvious machine-made creations. We’ll mix human-generated work with the rest, and ask a panel of judges to say whether they think each entry is human- or machine-generated. For the dance music competition, scoring will be left to a group of students, dancing to both human- and machine-generated music sets. A “winning” entry will be one that is statistically indistinguishable from the human-generated work.
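
The article does not spell out the statistical criterion, but one simple, hypothetical reading is a one-sided binomial test on the judges’ votes: an entry passes if judges flag it as machine-made no more often than coin-flipping could plausibly explain. A minimal sketch with invented numbers:

# One hypothetical reading of "statistically indistinguishable": an exact
# one-sided binomial test on the judges' votes. The numbers below are invented.
from math import comb

def binomial_p_value(successes: int, trials: int, p: float = 0.5) -> float:
    """Exact tail probability P(X >= successes) for X ~ Binomial(trials, p)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Suppose 20 judges rate a machine-generated sonnet and 13 call it machine-made.
p = binomial_p_value(13, 20)
print(f"p-value = {p:.3f}")   # about 0.132: not significantly better than guessing,
                              # so under this criterion the entry would count as indistinguishable.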

The competitions are open to any and all comers. To date, entrants include academics as well as nonacademics. As best we can tell, no companies have officially thrown their hats into the ring. This is somewhat of a surprise to us, as in the literary realm companies are already springing up around machine generation of more formulaic kinds of “literature,” such as earnings reports and sports summaries, and there is of course a good deal of AI automation around streaming music playlists, most famously Pandora.

Judging the differences

Evaluation of the entries will not be entirely straightforward. Even in the initial Imitation Game, the question was whether conversing with men and women over time would reveal their gender differences. (It’s striking that this question was posed by a closeted gay man.) The Turing Test, similarly, asks whether the machine’s conversation reveals its lack of humanity not in any single interaction but in many over time.

It’s also worth considering the context of the test/game. Is the probability of winning the Imitation Game independent of time, culture and social class? Arguably, as we in the West approach a time of more fluid definitions of gender, that original Imitation Game would be more difficult to win. Similarly, what of the Turing Test? In the 21st century, our communications are increasingly with machines (whether we like it or not). Texting and messaging have dramatically changed the form and expectations of our communications. For example, abbreviations, misspellings and dropped words are now almost the norm. The same considerations apply to art forms as well.

Who is the artist?

Who is the creator – human or machine? Or both?
Hands image via shutterstock.com

Thinking about art forms leads naturally to another question: who is the artist? Is the person who writes the computer code that creates sonnets a poet? Is the programmer of an algorithm to generate short stories a writer? Is the coder of a music-mixing machine a DJ?

Where is the divide between the artist and the computational assistant and how does the drawing of this line affect the classification of the output? The sonnet form was constructed as a high-level algorithm for creative work – though one that’s executed by humans. Today, when the Microsoft Office Assistant “corrects” your grammar or “questions” your word choice and you adapt to it (either happily or out of sheer laziness), is the creative work still “yours” or is it now a human-machine collaborative work?

We’re looking forward to seeing what our programming artists submit. Regardless of their performance on “the test,” their body of work will continue to expand the horizon of creativity and machine-human coevolution.

The Conversation

Michael Casey, James Wright Professor of Music, Professor of Computer Science, Dartmouth College and Daniel N. Rockmore, Professor, Department of Mathematics, Computational Science, and Computer Science, Dartmouth College

This article was originally published on The Conversation. Read the original article.