Key Takeaways from AI for Social Good | blog.datasciencedojo

Chief Data Scientist and CEO of Data Science Dojo, Raja Iqbal, held a community talk on AI for Social Good. This discussion took place on January 30th in Austin, Texas. Below, you will find the event abstract and my key takeaways from the talk. I've also included the video at the bottom of the page.

It's not hard to see machine learning and artificial intelligence in nearly every app we use – from any website we visit, to any mobile device we carry, to any goods or services we use. Where there are commercial applications, data scientists are all over it. What we don't typically see, however, is how AI could be used for social good to tackle real-world issues such as poverty, social and environmental sustainability, access to healthcare and basic needs, and more.

What if we pulled together a group of data scientists working on cutting-edge commercial apps and used their minds to solve some of the world's most difficult social challenges? How much of a difference could one data scientist make, let alone many?

In this discussion, Raja Iqbal, Chief Data Scientist and CEO of Data Science Dojo, will walk you through the different social applications of AI and show how many real-world problems are begging to be solved by data scientists. You will see how some organizations have made a start on tackling some of the biggest problems to date, the kinds of data and approaches they used, and the benefit these applications have had on thousands of people's lives. You'll learn where there's untapped opportunity in using AI to make impactful change, sparking ideas for your next big project.

Key Takeaways

  1. We all have a social responsibility to build models that don't hurt society or people.
  2. Data scientists don't always work with commercial applications.
    • Criminal Justice - Can we build a model that predicts whether a person will commit a crime in the future?
    • Education - Machine learning is being used to predict student churn at universities, identifying potential drop-outs so staff can intervene before it happens.
    • Personalized Care - Better diagnoses with personalized health care plans.
  3. You don't always realize when you're creating more harm than good.
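The student-churn example above can be sketched as a standard classification problem. This is a minimal illustration, not anything from the talk itself: the features (GPA, attendance, credits) and the synthetic data are assumptions, and a real system would use actual historical student records.

```python
# Hypothetical sketch of student-churn prediction: train a classifier
# on (synthetic) student records, then rank students by drop-out risk
# so advisers can intervene early. Feature names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Assumed features: GPA, attendance rate, credits completed.
gpa = rng.uniform(1.5, 4.0, n)
attendance = rng.uniform(0.4, 1.0, n)
credits = rng.integers(0, 120, n)

# Synthetic label: low GPA and poor attendance raise drop-out risk.
risk = 2.0 - gpa + (0.8 - attendance) * 3 + rng.normal(0, 0.5, n)
dropped_out = (risk > 0).astype(int)

X = np.column_stack([gpa, attendance, credits])
X_train, X_test, y_train, y_test = train_test_split(
    X, dropped_out, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probability of dropping out, used to prioritize outreach.
risk_scores = model.predict_proba(X_test)[:, 1]
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The interesting part is not the model but the intervention: the output is a ranked list for advisers, which is exactly where the "more harm than good" question in the talk applies (e.g., what happens to students the model flags incorrectly?).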

"You always ask yourself whether you could do something, but you never asked yourself whether you should do something."

  4. We are still figuring out how to protect society from all the data being gathered by corporations.
  5. There has never been a better time for data analysis than today. APIs and SDKs are easy to use, and IT services and data storage are significantly cheaper than they were 20 years ago, with costs still decreasing.
  6. Laws and ethics around AI and data use are still taking shape. Individuals, researchers, and lawmakers are still trying to work out the kinks. Here are a few situations with legal and ethical dilemmas to consider:
  • Granting parole using predictive models
  • Detecting disease
  • Military strikes
  • Availability of data implying consent
  • Self-driving car incidents
  7. Issues can arise at every stage of data processing. Everyone has inherent bias in their thinking, which affects the objectivity of data.
  8. Modeler's Hippocratic Oath
    1. I will remember that I didn't make the world and it doesn't satisfy my equations.
    2. Though I will use models boldly to estimate value, I will not be overly impressed by mathematics.
    3. I will never sacrifice reality for elegance without explaining why I have done so.
    4. I will not give the people who use my model false comfort about accuracy. Instead, I will make explicit its assumptions and oversights.
    5. I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension.
    6. I will aim to show how my analysis makes life better or more efficient.
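The point about bias creeping into each stage of data processing can be made concrete with a quick sketch. This example is my own illustration, not from the talk: the population and the survey mechanism are synthetic assumptions, chosen to show how a biased collection step skews an estimate even when every individual measurement is accurate.

```python
# Illustration of sampling bias: if who ends up in the dataset depends
# on the quantity being measured, the estimate is skewed even though
# no single data point is wrong. All numbers here are synthetic.
import numpy as np

rng = np.random.default_rng(42)

# "True" population: 100k incomes, log-normally distributed.
incomes = rng.lognormal(mean=10.5, sigma=0.6, size=100_000)
true_mean = incomes.mean()

# Biased collection: response probability grows with income
# (e.g., a survey channel that over-reaches affluent users).
respond_prob = np.clip(incomes / incomes.max() * 5, 0, 1)
responded = rng.random(100_000) < respond_prob
biased_mean = incomes[responded].mean()

print(f"true mean:   {true_mean:,.0f}")
print(f"survey mean: {biased_mean:,.0f}")  # noticeably higher
```

Any model trained on the "survey" sample inherits this skew, which is why bias has to be examined at the collection stage, not just at modeling time.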


This is a companion discussion topic for the original entry at https://blog.datasciencedojo.com/my-key-takeaways-from-the-ai-for-social-good-meetup/