Fairfield University Hosts Panel on Ethics and AI

On Monday, the Dolan School of Business at Fairfield University hosted a virtual panel on the ethics of artificial intelligence applications in commerce.

The panel included Jacob Alber, principal software engineer at Microsoft Research, Iosif Gershteyn, CEO of pharmaceutical manufacturing company ImmuVia, and Philip Maymin, director of the Business Analytics program at Fairfield University.

“Thank you all for joining us today as we discuss the ethics of perhaps the most important technology shaping our changing world today, namely artificial intelligence,” Gershteyn told viewers.

In a question-and-answer format, the panelists engaged in an hour-long debate on topics ranging from recognizing bias in AI to the possibility that code could become sentient.

This conversation has been edited and condensed for clarity.

How would we know if a piece of code has become sentient?

Maymin: It will be a social decision, right? Ultimately, we decide, as a society and as a legal system, what constitutes capacity. The definition of a person has changed many times over hundreds and thousands of years, and it changed again this year. The idea of who has rights, who has capacity, and who is a minor versus an adult has changed many times. Presumably, there will first be an AI that we should treat as a minor before treating it as an adult. So there will be a number of rights that go along with that.

Gershteyn: I believe code cannot be intelligent. By definition, intelligence requires understanding, and mechanisms do not understand. When you write a book, that book only exists when a human reads it. Code is neither intelligent nor sentient; it can only appear intelligent and sentient to intelligent and sentient beings.

Maymin: The counterargument I would make is that a strand of DNA is a very simple kind of book or code. You put all your mechanisms around it, and suddenly you have a living, breathing human who can say things that no one on earth has ever thought of. I don’t think it’s so crazy to think that code running on another mechanism could, in fact, also exhibit the same kind of intelligence.

Gershteyn: Well, in fact, there has never been a successful creation of life from non-life. And all the synthetic biology we work on always starts from a living base. Even if you create artificial DNA, you still have to put it into a plasmid, and so on. So even there, I strongly believe that intelligence is a property of life and, secondarily, of consciousness.

Alber: It’s interesting that you draw this kind of separation between sentience and intelligence. That raises a few questions. Can you have sapience without sentience? Can you have intelligence without consciousness? And if you can’t, how do you determine that something is conscious and has an internal subjective process? Our current test of human-level intelligence, the Turing test, has a very big flaw, and GPT is a perfect illustration of it: it will be perfectly happy to write the phrase “a herd of files flew under the tarmac,” but most people wouldn’t interpret that as sensible text.

To play devil’s advocate, from a scientific standpoint, I have no reason to argue that we currently need any additional ingredient to generate the qualia we see in humans, animals, and so on. So to that extent, it doesn’t seem unreasonable to say that code can be alive and can have a subjective experience. But for us to believe that, we need a much better understanding of what causes us to have subjective experience.

Should biased AI be stopped or overridden in special cases?

Maymin: It’s a complicated question. Let’s try to think about it from the other side: what if an AI discovers a bias based on protected-class information? You know, race, ethnicity, gender, age, religion, whatever. Suppose an AI discovers that historically oppressed minorities repay loans better, so it wants to offer them better rates. Should we prevent that in the name of reducing bias? Or is reducing bias only about making sure bias doesn’t harm certain people, while it’s fine if it benefits them?

Alber: Much of this question should be informed by the specific ethics of the field in which you are applying AI. There are several schools of thought on whether or not you should even consult data correlated with protected information. You may actually end up creating more bias if you ignore that information. If you include it and use it as a control to ensure that, say, your dataset is representative and proportional, you’ll end up with a better classifier in the end. So, oddly enough, you probably want to collect this data, but you want to be able to show statistically that your decision was not influenced by it.

Maymin: It’s an interesting irony, isn’t it? In order to try to reduce bias, we actually have to ask probing, personal, uncomfortable questions.

But if the AI finds relationships between inputs such as ethnicity or gender, it can get complicated. It could detect very arbitrary relationships that, if we knew what they were, we would shut down. People may ask, “How dare you look at this information? Sure, it wasn’t on the forbidden-information list, but any human would have known not to think of things that way.” And I don’t know if there is a way to protect against that.

Alber: There are a number of good toolkits that allow you to query models and extract causal relationships between your input data and your output data. Not to toot our own horn too much, but our lab at Microsoft Research is working on a toolkit called Fairlearn, and I urge people to take a look at it to help them understand what kind of bias they are including in their models.

That said, you have to remember that the purpose of the AI is to find the right bias for your model. When you start, your model is initialized more or less randomly. It won’t be balanced or fair unless you specifically design it to be uniform across your possible output space. Your goal is to find the correct bias so that it gives you the answers you want.

Gershteyn: There is huge confusion here between definitions of bias. In the mathematical sense, bias is a deviation from reality, and the whole point of the algorithm is to minimize this bias so as to conform as accurately as possible to the data set. The legal definition of bias, by contrast, is that certain categories should be excluded from decision-making, whether or not they have predictive value. So, in the end, overriding or stopping an AI comes down to the moral choice of a human agent, who also bears the legal responsibility.
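As a concrete illustration of the auditing approach Alber and Maymin describe, here is a minimal sketch using the open-source Fairlearn package he mentions. The data, group labels, and metrics below are invented for illustration, and the exact API may vary by Fairlearn version; the point is simply that the protected attribute is used to slice the evaluation, not to make the prediction.

```python
# Sketch: keep the protected attribute as a *control* for auditing,
# while excluding it from the model's features. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import MetricFrame, selection_rate

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                          # application features (synthetic)
group = rng.integers(0, 2, size=500)                   # protected attribute, held out of training
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # repayment outcome (synthetic)

model = LogisticRegression().fit(X, y)   # protected attribute is NOT a feature
pred = model.predict(X)

# The protected attribute only slices the evaluation; it never enters the model.
audit = MetricFrame(
    metrics={
        "accuracy": lambda yt, yp: (yt == yp).mean(),
        "selection_rate": selection_rate,
    },
    y_true=y,
    y_pred=pred,
    sensitive_features=group,
)
print(audit.by_group)      # per-group metrics
print(audit.difference())  # largest between-group gap for each metric
```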

Are privacy disclosures that no one reads ethically sufficient?

Maymin: You’re right – nobody reads them and nobody gets excited about them. Even some people who write them aren’t excited about them. And yet, from a business perspective, they have to protect themselves because people will sue them otherwise. This extends not only to privacy disclosures, but also to terms and conditions.

But it can get quite exciting if you recognize that there is a real market opportunity here. Imagine a company whose job it is to make privacy disclosures easier for me to understand. I’m happy to pay a dollar to have someone else read them for me and tell me if there’s anything I need to worry about. It doesn’t have to be a human being. It could be an AI that collects all the privacy disclosures, reads them, flags them, and, when they change, simply compares the new version to the old one and shows me the differences. I could feed them into OpenAI’s text predictor and say, “What do I need to worry about in these privacy disclosures?” This is a service I would pay for. Wouldn’t you?
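The “show me the differences” step Maymin imagines doesn’t even require an AI; a few lines of standard Python can already diff two versions of a policy. The policy texts below are placeholders, purely for illustration.

```python
# Illustrative only: flag what changed between two versions of a disclosure.
import difflib

old_policy = """We collect your email address.
We do not share data with third parties."""

new_policy = """We collect your email address and location.
We may share data with advertising partners."""

diff = difflib.unified_diff(
    old_policy.splitlines(),
    new_policy.splitlines(),
    fromfile="privacy_policy_v1",
    tofile="privacy_policy_v2",
    lineterm="",
)
print("\n".join(diff))  # prints exactly the lines a user might care about
```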

Alber: The idea of feeding an AI a privacy policy and having it tell you what’s most important creates a chicken-and-egg problem. Each of us has different values and places a different level of importance on various privacy issues. For example, maybe I don’t view my age as particularly private when I’m online and am reasonably comfortable giving it away, but maybe I’m a little more wary of giving away my location or my religious or ethnic affiliation. So unless each person creates their own custom AI to analyze privacy policies, you’ll need a custom model for each person. And once you do that, you’re collecting their data to generate that custom model. You can set it up so that the data never leaves the sovereignty of the user, but at the end of the day, I really don’t think it makes sense to put that much effort into training an AI to do this.

So I think we as an industry can do a lot to create easy-to-read outlines of policies. And if we think diffing is a useful tool, then we should think about how to standardize the representation of these policies so that users can say, “I want to compare Brand A’s privacy policy with Brand B’s.” So, to use a metaphor I’ve heard, your AI products should have nutrition labels telling the user what data they’re collecting and for what purpose. That’s the dream. I believe in humanity’s ability to do this.
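To make the “nutrition label” metaphor concrete, one could imagine a standardized, machine-readable summary of data practices, along the lines of the purely hypothetical sketch below. The field names are invented for illustration and do not correspond to any existing standard.

```python
# Hypothetical sketch of a machine-readable privacy "nutrition label".
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataPractice:
    data_type: str                              # e.g. "location", "age", "browsing history"
    purpose: str                                # e.g. "ad targeting", "fraud prevention"
    shared_with: List[str] = field(default_factory=list)
    retention_days: Optional[int] = None

@dataclass
class PrivacyLabel:
    product: str
    practices: List[DataPractice]

label_a = PrivacyLabel(
    product="Brand A app",
    practices=[
        DataPractice("location", "ad targeting", shared_with=["ad network"], retention_days=365),
        DataPractice("email", "account login"),
    ],
)

# With a standardized representation, comparing Brand A with Brand B reduces to
# comparing two structured objects rather than two walls of legalese.
print(label_a)
```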

Gershteyn: But I think the legalese is actually the biggest problem. Philip mentioned capacity as one of the fundamental requirements for a contract to be valid. Capacity is so uneven between the consumer and the team of lawyers who write privacy policies that, because of their complexity and length, no one reads them. Contracts need to be understood by all parties, and nutrition labels move away from legalese and give you a clear, fair picture of the information you are giving up. That’s the way to go, and that’s really what needs to happen. But unfortunately, all the incentives are against it.

