Artificial selection

Universal design and artificial intelligence

An elderly couple I know recently sold their house and became tenants. They think they got a good price, but I’m not so sure. I’m fairly certain they were taken in by a persuasive sales pitch from one of those home-buying companies that are buying up so many homes. They were homeowners, paying down their own mortgage and benefiting from the growing value of their home. Now they are tenants, helping to pay down their landlord’s mortgage and build their landlord’s equity.

It all made me think of universal design and artificial intelligence (AI). Universal design is the discipline of designing products and services so that as many people as possible can use them. It was born out of a desire to meet the needs of people with disabilities. The idea is that instead of confining people with disabilities to specially designed products, they should be able to use mainstream consumer products. Universal design was driven by the generally poor experience people with disabilities have had with products designed especially for them, which were too often of inferior quality and of limited availability and capability compared to their mainstream counterparts. Thus the idea was born that, with only a little extra effort, consumer products could be made usable by people with disabilities.

According to the Centre for Excellence in Universal Design (CEUD),

Universal design is the design and composition of an environment so that it can be accessed, understood and used to the greatest extent possible by all people, regardless of their age, size, ability or disability. An environment (or any building, product, or service in that environment) should be designed to meet the needs of all people who wish to use it. This is not a special requirement, for the benefit of only a minority of the population. It is a fundamental condition of good design.

CEUD website

What does universal design mean for AI systems?

AI systems are triggering a renewed need to address cognitive impairments and limitations. Universal design has always included cognitive disabilities, but the focus has been on physical disabilities. There are several reasons for this. First, we know more about designing for physical disabilities. Generally, if each user input and output is redundant, meaning that it can be given or received in more than one way, then the design will be accessible. If someone can’t do something one way, they have the option of doing it a different way. If they can’t type, they can speak the commands. If they cannot read the screen, a screen reader will read it aloud to them. Because we know more about how to design physically accessible systems, this aspect of accessibility has received more attention.

Another reason accessibility focuses on physical disabilities is that many of the measures taken for physical disabilities also help people with cognitive impairments. Having redundant inputs and outputs means that a person with a cognitive disability receives messages in more than one way, increasing the chances that they will understand them. Having multiple ways of controlling a device allows people to find the way that works best for them. Good physically accessible design, in general, also contributes to cognitive accessibility.

AI-enabled systems that are designed to get us to do things create the need to approach cognitive accessibility in new ways. Especially in the emerging metaverse, these systems are built to get as many of us as possible to buy a product, subscribe to a service, vote for a particular candidate or party, or take whatever other action their developers desire. I suspect that my elderly friends who sold their house were convinced to do so by an AI-enabled system. We have always had con artists and manipulators preying on the most vulnerable among us, but now their efforts are aided and amplified by systems powered by AI. It is, and will increasingly be, possible for them to create a reality in which the only reasonable choice is to do what they want us to do. Is this the kind of future society we want to live in?

Our inclusive society: a legal overview

Our society has consistently chosen to include the greatest possible share of the population. This can be seen in the series of laws supporting the needs of people with disabilities. Perhaps the best-known accessibility law is the Americans with Disabilities Act (ADA) of 1990. Before that came the Hearing Aid Compatibility Act of 1988. In 1996, accessibility of telecommunications was addressed in Section 255 of the Communications Act. Subsequently, Section 508 of the Rehabilitation Act made information technology accessibility mandatory for all US federal agencies. A review of accessibility laws shows that roughly every two years, steps have been taken to make some part of our society more accessible. These measures have been advanced by both political parties and have often passed with bipartisan support. These laws represent a social consensus: most of us want to live in as inclusive a society as possible.

AI ethics

What ethics should guide us as a society as AI systems become increasingly able to manipulate us into making decisions that are not in our best interests? I have argued that we need to develop an AI theology. As I use the term here, theology is the foundation from which our enduring ethics emerge. In my opinion, and I think in the opinion of most people, all of humanity is united by common bonds. Because of these shared bonds, all people should be respected and protected. We are each other’s keepers. We have a responsibility to create a fair society for all of us.

As AI systems become more sophisticated, we will need to expand our societal standards to meet their new capabilities. As virtual reality allows AI systems to manipulate us with alternate realities, we will need safeguards. Perhaps the first ethical premise should be this: it is wrong to take advantage of someone with a cognitive disability. Just because we can do something doesn’t make it right. For my part, I do not want to live in a society where the elderly are driven from their homes. I expect many people share this view. Just because a sales system can convince people to make a decision that will seriously harm their future does not mean such a system should be allowed to do so.

Mapping out the ethical use of AI systems will be difficult work. Even more difficult will be developing the mechanisms to implement and enforce these ethical constraints. The last thing we want is AI systems that prey on the weak and the vulnerable. Those who develop these systems must be guided in their work by an ethical credo appropriate to the technology. Our goal should be to create a future in which AI systems serve us all well, including those who are weak or particularly vulnerable.