
Spotlight on David Epstein

We asked David Epstein, Executive Director of the Susilo Institute for Ethics in the Global Economy at the Questrom School of Business, Boston University, and a featured speaker at The Global Brand Convergence®, questions we received about the ethics of AI and the trends he is seeing.

Along with teaching ethics at Boston University, he has taught ethics in his finance and entrepreneurship classes at the University of San Francisco, University of California, Berkeley, Menlo College, and other universities. Recently, Dave completed a year at Stanford University as a Distinguished Careers Institute Fellow. As an advisor and investor at Epstein Advisors, Dave continues to be active in the entrepreneurial world. His interests include clean tech, semiconductors, health care, and socially responsible enterprises.

Here are his insights!

Q. What is the most frequent question you get regarding artificial intelligence and its ethical implications?

A. Many people wonder if the AI craze is more disruptive than the other technological tools we have become accustomed to receiving on a continuous basis. So they ask: is it really going to replace jobs, or will it make us more productive, with new jobs filling the void? The ethical question embedded here is whether we are doing society good or harm. As with any ethical question, there is no clear answer. First, I do believe this is much more than a tool, and although it has evolved over time to reach this point of usefulness, it has hit a tipping point. In so many areas of our work lives, AI is making us more productive; that is a given. When we use it to write an article or a program, or to plan an event, we quickly see its power and its ability to get things done in record time. Now the question is whether we need more articles, programs, and events. It is not clear whether work will expand at the rate of job replacement. If it doesn't, then the next question is who should be responsible for retraining our workforce to keep employment up, or who should pay for the social safety net that may be required to keep people out of poverty.

Q. In what ways has AI changed the ethical construct around the classroom/academic environment?

A. All schools and universities are struggling with this one. On the one hand, we, as educators, must train our graduates to use AI so that they are productive at work. On the other hand, we don't want AI to DO their work for them, as that would make them useless in the workforce. Some educators are no longer assigning essays as homework, others have resorted to oral exams, while some simply teach as they always have and leave it to the students to realize that letting AI do the work for them does not prepare them for the future. Educators and education will change, and maybe in a very large way. Universities must adapt to the new paradigm with more thoughtful assignments, students must take some responsibility for their own education, and we should all expect to see AI teachers that will begin to displace human teachers, at least for basic courses, and eventually for more complex ones.

Q. Similarly, AI has become a huge topic in the boardroom. What are some of the ethical implications that you see in terms of organizations and how they're grappling with it?

A. Boards will demand that AI be employed to make corporations more efficient and thereby more profitable. The ethical considerations there will be what to do about the layoffs or hiring freezes that result. Unions already have something to say about this, as seen in the recent writers' and actors' strikes in Hollywood.

Another area is that wide use of AI might expose company secrets or, on the reverse, may encourage workers to use proprietary material from other companies, intentionally or not. There will have to be strict rules on which AI to use and which not to, but this will be nearly impossible to police, especially for the larger companies. Additionally, liability resulting from relying on AI for guidance or solutions that may be wrong, biased, or harmful to customers or the community is still uncharted territory that courts will be deciding. This is a minefield for corporations and must be considered going forward. There are new companies and consultants popping up that help corporations navigate these waters by consulting on responsible AI. Lastly, all companies must have people responsible for following regulations and guidance from new laws, such as those from Europe and China and the latest Executive Orders just introduced in the US.

Q. How do you advise your students and colleagues on applying ethics when making decisions regarding the use of AI and its implications, especially when workers are displaced, processes are changed, and relationships can be compromised?

A. In our MBA classes, we advise students and workers to apply a framework for reviewing solutions and work products that considers and anticipates the direct and unintended consequences that will result. There is no shortcut to understanding the impact of a product or solution on all stakeholders, including your customers, your corporation, employees, partners, and the community. Understand your values and those of the company and community you work in, and compare the consequences you foresee with the values you and your organization hold. Keep your values in mind, and you are less likely to make a terrible mistake. As Boston University's behavioral economics professor Nina Mazar wisely says, "Don't ask what I should do now, but what should I do if I stay true to my values."

Q. In what ways has your job as an ethicist changed because of AI?

A. It will keep me gainfully employed!

Join David on November 29 at 8 AM ET, along with our other featured speakers and performing artists, at the Global Brand Convergence®. We look forward to seeing you there.
