The Ethics and Risks of Artificial Intelligence, and How to Hire the Right People

The newest pirate on the Fortune’s Path pirate ship is Sharon Chou, data product advisor. She has designed and implemented a number of data science projects for SaaS, tech support, online education, healthcare, tax advisory and human resources organizations.

Tom and Sharon cover a wide ranging set of topics surrounding the world of data, including:

  • The differences between traditional computing and quantum computing

  • The ethics and risks of artificial intelligence

  • How AI in mortgage and loan decisions can be used for good when it comes to fairness and ethical considerations

  • The application of AI in the hiring process

  • The role of emotion in decision-making

The upshot? A healthy dose of skepticism around information sources, combined with the scientific method, is what companies should bring to getting the most out of their data and to incorporating AI into their business practices.

Tom Noser — Sharon. It is wonderful to see you. Thank you so much for joining me today. I really appreciate it.

Sharon Chou (00:34.944)

Yeah, it was great meeting you the other day, Tom. Had a great party.

Tom Noser (00:38.866)

Yeah, thanks. I'm very interested in your history as an academic and how it relates now to your practice as a consultant. So can you tell me, I think you began as a researcher in physics, is that correct?

Sharon Chou (00:59.252)

Yes, actually my undergrad major was electrical engineering and computer science. It's a two-part major.

Tom Noser (01:05.982)

Okay.

Sharon Chou (01:10.564)

That was the time when I found myself really interested in building things. The electrical engineering side is the physical aspect of building things, and the computer science side is writing code and operating the things that you build. So I thought that was a great combination for me. And as I progressed in my major, I found that a lot of interesting projects in the industry tend to be toward the software programming side of things. And so I gradually shifted toward that side.

Abstract painting by Sharon Chou

Sharon Chou (01:56.476)

In my numerous research projects, one of them happened to be very mathematically and statistically focused on circuit design at the very tiny level. When you get down to the really small near atomic level, a lot of the modeling in circuit design ended up being very mathematical and statistical when you try to model the interaction of different atoms and different tiny transistors.

So that's how I moved toward the more physics-oriented aspect of my work. My PhD research was a combination of material design and quantum physics. We were trying to create a surface that would shoot out electrons with very little input.

In trying to model this surface, in a way we were trying to cheat nature, because nature really likes surfaces to be stable and not just shooting electrons out all the time. It was really interesting trying to find the right conditions under which certain kinds of material surfaces can actually be really active in shooting out electrons.

Then we can put these surfaces into energy conversion applications. So we can convert heat energy into electricity, and this heat energy can come from all different places. It can come from the sun, it can come from jet engines, and it would make very efficient micro energy converters.

Tom Noser (03:57.582)

Well, there's a lot to unpack in there. It's really interesting. I want to talk a little bit about a circuit. Let me share my layman's understanding of circuits and how they relate to how computers work, and please expand and correct.

So everything inside a computer boils down to a 1 or a 0. It's binary. And my understanding is that a portion of the circuit can either have a charge or not have a charge. Now I'm really getting into the area that I know nothing about, so I'm making this crap up. But the more charges a circuit can hold, the more processing power it has, the more, as you say, electrons you can stick on there. It's either there or it's not there, a one or a zero.

But the way circuits process today is in a linear fashion. It's flowing through the circuit and it's either there, you know, anyway, you can see I've already crashed and burned here.

Help me understand how does a circuit work and why it would be beneficial to have a surface which can throw off electrons, which sounds like a very different design than what we're used to for circuits.

Sharon Chou (05:25.744)

To clarify, the surfaces that would be shooting off electrons easily would be used more like...

Tom Noser (05:32.145)

Mm-hmm.

Sharon Chou (05:38.092)

Batteries are more closely related to that particular application than the circuits that we would use in our phones and computers. The circuits that we use in our personal devices, they are a combination of little tiny switches. So it's a matter of making the tiny switches and writing the code…

Tom Noser (05:41.522)

Okay. Then circuits.

Sharon Chou (06:08.152)

… to control the switches in a way that the circuit can generate the signals that you want. And that is the simplest way that I can explain it.
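Sharon's picture of circuits as tiny switches controlled by code is essentially Boolean logic. As a toy illustration (ours, not from the conversation), a few lines of Python can stand in for switches being combined into gates, and gates into something useful like adding two bits:

```python
# Each "switch" is just on (1) or off (0). Combining switches gives logic gates,
# and circuits combine huge numbers of gates to produce the signals a processor needs.
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def NOT(a):
    return 1 - a

def half_adder(a, b):
    """The classic building block for adding two bits: returns (carry, sum)."""
    carry = AND(a, b)
    total = OR(AND(a, NOT(b)), AND(NOT(a), b))  # exclusive OR built from basic gates
    return carry, total

for a in (0, 1):
    for b in (0, 1):
        carry, total = half_adder(a, b)
        print(f"{a} + {b} -> carry={carry}, sum={total}")
```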

Tom Noser (06:19.154)

OK, even I get that. There's talk about quantum computing. Can you briefly define why quantum computing is different from traditional computing?

Sharon Chou (06:37.364)

Quantum computing is trying to create different states in a circuit, whereas traditional circuits have the zero and one, which is a switch being on versus off. But in quantum states, there are many possible states in between.

Quantum computing is trying to take advantage of those many, many states to make a lot of computation faster. This can be good or bad. Take very complicated numerical modeling, like modeling weather phenomena.

Sharon Chou (07:22.572)

There are lots of calculations involved, and a quantum computer can theoretically make them much faster. At the same time, though, our current encryption methods, if somebody wants to crack a password,

Sharon Chou (07:44.476)

what would normally take hundreds of years right now might be doable in minutes with a good enough quantum computer. This is definitely a very exciting field right now. I know companies are throwing money at this.

But it is very hard to build a good enough stable quantum computer because you have to really lower that operating temperature to get that right fluctuation of quantum states. You have to get really low temperature, like very low. It's like colder than liquid nitrogen kind of low. Researchers are trying to get that operating temperature higher.

Sharon Chou (08:41.92)

It would be very interesting to see the latest updates in that area for sure.
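To make the contrast between a classical bit and a quantum state a little more concrete, here is a minimal sketch (not from the episode) that represents a single qubit as two amplitudes with NumPy; measurement collapses it to 0 or 1 with the probabilities the amplitudes encode:

```python
import numpy as np

# A classical bit is exactly 0 or 1. A qubit's state is a vector of two complex
# amplitudes; the squared magnitudes give the probabilities of measuring 0 or 1.
state = np.array([1, 1]) / np.sqrt(2)      # equal superposition of |0> and |1>
probabilities = np.abs(state) ** 2          # -> [0.5, 0.5]

# Measuring collapses the superposition: each shot yields a definite 0 or 1.
rng = np.random.default_rng(0)
shots = rng.choice([0, 1], size=10, p=probabilities)
print(probabilities, shots)
```

Real quantum hardware also exploits interference between amplitudes, which a simple sampler like this does not capture; the point here is only the many-states-in-between idea Sharon describes.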

Tom Noser (08:49.218)

I appreciate that. You started out in electrical engineering and computer science. Then you got interested, it sounds like, in batteries, or at least in the efficient storing of power. Is that right?

Sharon Chou (09:06.476)

I would say the efficient generation of power, in an energy converter. When you take heat energy from the sun, for example, how to convert that efficiently into electricity that we can harvest. So...

Tom Noser (09:21.943)

Right.

Sharon Chou (09:32.532)

That would involve a material that is very active in electron emission on one end, right? On the other end, there would be a very good collector. That would create a good circuit. In fact, more of the inspiration came from vacuum tubes in the 1950s, where you would have this light bulb structure, suck the air out of it, and have two pieces of metal in there.

One piece of metal would be shooting out electrons and the other piece of metal would be collecting it and it would have this circuit inside this tube. What we were doing, yeah, years ago, was to really shrink down that structure.

It is micron sized, and one of my collaborators found out that a few microns is actually the size at which this vacuum tube conduction phenomenon reaches its theoretical maximum efficiency, which is really cool, because that is just the right size to be manufactured in all the semiconductor foundries that can make micron-sized devices. This became pretty exciting. Now we can design materials and they could be made in an actual factory.

Tom Noser (10:50.359)

Yeah.

Sharon Chou (11:13.472)

That was pretty cool.

Tom Noser (11:14.786)

That is very cool. So tell me why you left research and went into business, because it sounds like your research was pretty exciting. And if you're a part of something that's like efficient conversion of sunlight into electricity, that's a history changing event.

Sharon Chou (11:35.624)

I would say that I did find some of the industry projects that I was involved in at the time to be more results driven in a way, with a faster cycle time. I was more drawn to that towards the end of my PhD.

Sharon Chou (12:05.056)

I realized that a lot of academia was a bit wrapped up in a lot of theoretical brainstorming, which is great. At the same time though, I feel like, hmm, surely there are more efficient ways to actually drive results forward.

Tom Noser (12:21.8)

It's fun.

Sharon Chou (12:34.652)

It does partially depend on the field that you're in, though. Some fields are motivated to get moving faster. I'd say medicine tends to be that way, where there is a lot of funding and donors who want to really push the research faster. The research tends to move faster that way.

I was looking forward to making an impact with a bit faster cycle time.

Tom Noser (13:14.126)

It's interesting. So I believe that ideas change the world, that they're the primary driver of all of the historical forces around us. The development of an idea, like you say, and that sort of brainstorming can be pretty rapid. But the practical implications of the discovery of that idea may take centuries.

But the birth of the idea itself to me is just an absolutely fascinating process. So I'm going to put you on the spot. If you don't want to answer this question, tell me, I don't feel like answering that right now. It's too hard to answer on the spot.

So my very limited understanding of Einstein's famous equation is that it said matter could be converted into energy. If you do that, a very small amount of matter can result in an enormous amount of energy if it's a pure conversion.

Can you talk a little bit about sort of how that idea has changed our physical world and kind of changed the course of history?

Sharon Chou (14:42.652)

The example that lots of people know about would be the atomic bomb.

Tom Noser (14:46.446)

Mm-hmm. Yeah. Mm-hmm.

Sharon Chou (14:49.844)

There's been so much history around that subject, and lots of brilliant scientists were involved in the making of the bombs. That is certainly the most significant historical example of a technology that was directly...derived from that idea.

Tom Noser (15:20.614)

Mm-hmm.

It was a theory. I think it was a theoretical idea. And the bomb essentially said, yep, that's right. That's how it works.

Sharon Chou (15:25.437)

Yeah.

Sharon Chou (15:31.536)

This goes to show that when you have an idea, it is quite possible to develop technology that could be used for good or for bad. That goes for any technology. It goes for artificial intelligence, certainly, which...

Sharon Chou (15:57.244)

We should at best think of it as a tool, and it is a neutral tool, even though many people have the tendency to either deify it like a god, like, oh, AI is going to solve everything.

Sharon Chou (16:14.128)

Or they would fear it so much they demonize it and say, oh, AI is going to take our jobs. It is going to destroy civilization.

Neither of those ways of thinking is productive. We really need to be focused on getting a better understanding of how it actually works. That's where I see a lot of people tripping up.

Sharon Chou (16:41.74)

Fortunately, I work with a lot of people who are very open-minded in trying to understand just how all these algorithms currently affect our lives.

Tom Noser (16:54.11)

Talk a little bit about AI. I'm glad you transitioned us to that subject. Thank you. Are you seeing intelligent applications of that technology right now? What are the ways it's being applied that excites you? We'll just start with that.

Sharon Chou (17:15.012)

I would say AI is a catch-all term for anything that seems like it simulates human intelligence in some way. I personally use the term machine learning model a lot more, because it is a way to predict what would happen…

Tom Noser (17:21.247)

Yeah.

Sharon Chou (17:44.556)

based on historical events. A lot of machine learning is trying to find patterns in what happened in the past and apply them to future circumstances. In some cases it works really well, and in some cases it may not work so well.

Sharon Chou (18:14.22)

For example, take self-driving cars. These are cars that try to anticipate what a human driver would see on the road and try to react accordingly, like a human driver would. However, it quickly became obvious that it is not simply recognizing, ‘here's where my wheels are, and here's where the road is,’ because even the problem of, okay, here is where the road is, actually gets very complicated for machines. We humans have millions of years of advantage in evolution. We have eyes, and we have the...

Sharon Chou (19:05.76)

visual cortex, the brain that can interpret what we see. We have a lifelong experience of, ‘this is the road, it can look different under different weather conditions.’ Sometimes there's a rock. Okay, yeah, we should avoid the rock. When you have machines though, machines don't have...

Sharon Chou (19:30.468)

any of that experience, and they could very well get tripped up by a metal plate on the road, thinking, oh, I ran into a truck, I have to stop right now, right? But it's just a metal plate on the road. There are many such examples where machine learning algorithms would not function as well as a human could.

Sharon Chou (19:53.204)

In this case, with the self-driving car, right now the best compromise would be to use select pieces of the technology, more like an assistant to the human driver, because we would not want the human driver to be complacent, to just completely relax sitting inside the self-driving car.

Tom Noser (20:16.396)

Yeah.

Sharon Chou (20:20.808)

Right? On the one hand, it's dangerous to let the machine take over completely. But on the other hand, humans by themselves are not 100% reliable drivers, I'm sure we know.

Tom Noser (20:36.266)

Yeah. Yep.

Sharon Chou (20:39.072)

As the current technology stands, the best of both worlds is to find that balance point where we use the technology as a helping hand without completely letting it take over.

Tom Noser (20:53.013)

Questions of artificial intelligence or machine learning always eventually run into questions of ethics, in my opinion. So to take your example of self-driving cars, one of the questions an algorithm, which is driving a car, has to be able to answer is, is it okay to drive on the grass? Human beings are able, as you say,

Sharon Chou (21:17.47)

Mm-hmm.

Tom Noser (21:24.126)

to make a judgment about that, the ethics of driving on the grass, very intuitively - we're not consistent about it - but we can do it very intuitively. Whereas a machine, as you said, wouldn't even recognize grass as road.

That's a full stop. And now if you want to introduce the idea that it can drive on the grass, it's like, when and under what circumstances?

Sharon Chou (21:51.548)

Yeah, quickly it gets very complicated for the machine.

Tom Noser (21:55.182)

Machine learning to me is an interesting idea, but I'm not actually sure machines learn in that sense.

There's different ways to define learning. One way is to say that learning is an act of creation and destruction. You have to destroy your previous conception of something and replace it with a new conception. That's what

Tom Noser (22:25.214)

learning is, even if your previous conception was ignorance, you have to give that up and replace it with some new construction. If that's the definition of learning, I'm not sure machines actually learn.

Sharon Chou (22:38.956)

The machines, they learn in a way that is purely limited to the examples that you input into the machine. So say we'd like an algorithm to tell apart dogs versus cats. You would input lots of dog photos and lots of cat photos into the algorithm, and the algorithm should hopefully find what the dog photos have in common, what the features are, and what the cat photos have in common.

Sharon Chou (23:19.356)

Then it determines whether the next photo it sees is a dog, a cat, or neither. And the more photos of dogs and cats that you give to the machine, the better it would be. In theory. This is where that idea of data curation

Sharon Chou (23:47.792)

and data labeling becomes very critical. When you give the machine dog photos to learn from, you have to make sure the focus is on the dog and it doesn't matter what background the dog is in front of. The machine should completely ignore the background.

Sharon Chou (24:15.648)

The background could be a beach, a yard, a football field. If the dog photos all end up in someone's yard, then when you give the machine a dog at a beach, it would not be able to tell that it is a dog, because it would think, oh yeah, it does not look familiar. It's on a background that I have not seen before.
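As a toy illustration of the background problem Sharon describes (our own sketch, with made-up features, not code from the episode), imagine every training photo of a dog was taken in a yard. A model can then lean on the "yard" signal instead of the animal itself and stumble on a dog at the beach:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Hypothetical hand-built features: a noisy "animal" feature that genuinely separates
# dogs from cats, and a "background is a yard" feature that is only spuriously
# correlated with the label because of how the training photos were collected.
is_dog = rng.integers(0, 2, n)
animal_feature = is_dog + rng.normal(0, 1.5, n)   # weak but real signal
background_yard = is_dog.astype(float)             # perfectly confounded in training

X_train = np.column_stack([animal_feature, background_yard])
model = LogisticRegression().fit(X_train, is_dog)

# A dog photographed at the beach: strong animal signal, but background_yard = 0.
dog_at_beach = np.array([[2.0, 0.0]])
print(model.predict_proba(dog_at_beach))  # tends to lean toward "cat", because the
                                          # model learned to rely on the background
```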

Tom Noser (24:37.966)

To me, that's really interesting, because it introduces the idea of bias within AI, bias within the decision-making, the rubric that the machine is creating to identify dogs. This is something that, in fact - I want to understand whether or not this is a risk, from your point of view - when you've created an algorithm and you're

Tom Noser (25:13.678)

teaching a machine to make decisions based upon the data you fed it and the algorithm you've given it, does it build its own decision criteria? Can it tell me what that criteria is? Can I stop the machine at some point and say, tell me how you're deciding whether something is a dog or not?

Sharon Chou (25:23.2)

Yes.

Sharon Chou (25:35.084)

There are simpler, more obviously statistical algorithms that can tell you more transparently what the rules are. Logistic regression is an example of such an algorithm that would more transparently tell you: it is based on features one, two, and three that I am calling this a dog image versus a cat image.

Actually, image recognition is not the best example to use with logistic regression. Better examples would be deciding whether to give somebody a loan or whether to grant somebody parole.

Those would be higher-stakes examples where you have to be very transparent about why one decision was made versus the other.
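For readers who want to see what that transparency looks like, here is a minimal, hypothetical sketch: a logistic regression for a loan-style decision on invented features, where the coefficients can be read directly as the weight each feature carried, and a single applicant's score can be decomposed feature by feature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Invented features: repayment history, income-to-debt ratio, age of credit history.
X = rng.normal(size=(n, 3))
# Synthetic ground truth in which repayment history matters most.
y = (1.5 * X[:, 0] + 0.5 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 1, n)) > 0

model = LogisticRegression().fit(X, y)

# The coefficients are the explanation: each says how strongly that feature
# pushes the log-odds of approval up or down.
feature_names = ["repayment_history", "income_to_debt", "credit_age"]
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>18}: {coef:+.2f}")

# For one applicant, the per-feature contribution is coefficient * feature value.
applicant = np.array([0.8, -0.3, 1.2])
print("contributions to log-odds:", np.round(model.coef_[0] * applicant, 2))
```

That per-feature breakdown is the kind of "backtracking" Sharon mentions a moment later; more complex models need extra tooling to produce anything comparable.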

Tom Noser (26:43.042)

The machine renders a decision based upon the data it's been fed and the algorithm it used to crunch that data. But then you have to go through a separate process to understand how it arrived at that decision, which is this analysis. Is that correct?

Sharon Chou (27:03.204)

So when you evaluate a model, you would evaluate it with cases such that you can backtrack. You can look inside the model, and it should tell you: based on these kinds of features, that's why the decision was made.

Tom Noser (27:22.358)

Mm-hmm.

Sharon Chou (27:23.956)

Logistic regression can tell you that. There is a lot more effort now to try to get more complicated algorithms to give you the reasoning behind it.

Tom Noser (27:35.755)

Yes. You mentioned all of the evolutionary advantages human beings have over machines. One of them is we can be metacognitive. We can think about our own thinking. As far as I know, there are not machines that think about their own thinking.

Sharon Chou (27:55.844)

Not as far as I know either, which is good. Good, good for us for now.

Tom Noser (27:58.694)

Yeah, that's right. But it's extremely important, though, for accountability to be able to explain how you arrive at your decision. So you talked earlier about the demonization and the deification of artificial intelligence. I love those phrases, by the way.

Sharon Chou (28:15.167)

Right.

Tom Noser (28:28.406)

One of the demonizations that I've heard about it is that when you set an AI on its way, and you establish your algorithm to define kind of how it's gonna go about solving its problem, and then you throw the massive data at it, and it churns to reach its conclusion, you have no idea what it's doing in that time between giving it the assignment and having it come to its conclusion.

And it's really difficult for us to understand what process it is going to follow as it goes through that. It's a complete black box between the initial assignment and the result. We're just trusting the machine not to do something awful in that process, or come to some really horrible conclusion, since it has no ethics.

Is that an overblown concern, or is that something you've thought about, like, yeah, that is kind of a risk?

Sharon Chou (29:29.712)

It is certainly a risk that we should all be aware of. Because machines have no ethics, it is completely up to the humans, the people who design the algorithms and the people who provide the training data that trains the algorithms, to...

Sharon Chou (29:57.46)

test it and make sure that it is not giving us unintended results. These days, there is the realization that the input data is more important than the machine learning model itself.

The machine learning model itself is the sausage maker.

Sharon Chou (30:25.224)

You turn the knob and it comes out sausage. But what makes a great tasting sausage versus a bad tasting sausage is the quality of the input.

Tom Noser (30:26.509)

Turn the crank. Right, right. Mm-hmm.

Tom Noser (30:35.862)

What goes in? That's going to make, I really appreciate that point, that's going to make the development of large language models a lot harder.

Today my understanding is the way we've done them is, ‘oh, just go out and grab anything on the web, and whatever's on the web we'll throw it in there,’ and that's our sausage. And since the web is full of lies and deceit and manipulation,

Sharon Chou (30:53.612)

Completely, yes, that's right. That's right.

Tom Noser (31:04.974)

the generative AIs are vomiting out all that garbage. If you want to have quality, your sausage-making analogy is awesome. If you want to have quality control in there, now you're talking about, I guess, licensing LLMs, large language models.

There has to be some accountability on that input. And these things work the best when they have the most data to crunch.

Tom Noser (31:34.386)

Is that a major setback to think about the sourcing is now going to become a lot harder?

Sharon Chou (31:41.796)

I would say there would be applications where you would train more specific large language models, geared toward uses where you would not want them hallucinating all the time. That's where you have to curate a smaller set of data, perhaps, but really make sure that you know exactly what it contains.

Sharon Chou (32:20.192)

Versus a really general-purpose model, which is what we see: it's trained on a large part, or at least part, of Wikipedia, some unpublished corpus of fiction written by various people, various news websites...

Sharon Chou (32:45.748)

Because it is not curated, because everything that's just floating out there is fed into the algorithm, the algorithm will just take it as fact. That's where we have to be very careful: if we want to make sure that the machine will only give you believable information, then we have to feed it as such.

Sharon Chou (33:13.484)

And spend that time to curate and label the data. And that's what I suspect is where lots of time and effort is being spent. It's a very costly process, which I believe is not being done as much as it should be.
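Curation at its simplest is just a gate in front of the training corpus: filter by source, run basic quality checks, and record a label a reviewer can audit. Here is a toy sketch with invented sources and rules, not a description of any real pipeline:

```python
# Hypothetical corpus records tagged with a source; only vetted sources pass the gate.
TRUSTED_SOURCES = {"internal_docs", "licensed_publisher", "peer_reviewed"}

corpus = [
    {"text": "Quarterly revenue grew four percent.", "source": "internal_docs"},
    {"text": "Miracle cure discovered!!!", "source": "random_forum"},
    {"text": "Study results across 1,200 participants...", "source": "peer_reviewed"},
]

def curate(records, trusted=TRUSTED_SOURCES, min_length=10):
    """Keep only records from vetted sources that pass a basic length check,
    attaching a label so a human reviewer can audit what was admitted."""
    kept = []
    for record in records:
        if record["source"] in trusted and len(record["text"]) >= min_length:
            kept.append({**record, "label": "approved"})
    return kept

print(curate(corpus))  # the forum post never reaches training
```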

Tom Noser (33:38.062)

Let's go to your example you talked about earlier of granting parole or granting a loan. Those to me feel like use cases for a large language model or a big data set where you could control the quality of the data going in and it could significantly speed up and potentially improve decisions and also possibly remove bias from decisions for both of those cases. As you said, where the stakes are very high.

Let's say you've been hired as a contractor for a large bank, you’re Bank of America now, and they want you to make first mortgages affordable.

We can charge potentially a lower interest rate. Let's find out if we can charge a lower interest rate for first-time home buyers in exchange for locking them in as a lifetime

Tom Noser (34:36.87)

customer of Bank of America. If over the course of their career, we think they have high earning potential, we can afford to take a loss on their first 30 year mortgage because they're only going to stay in it for five or six years. They'll buy a more expensive home, maybe in the future, as rates come down, they'll have to pay a premium, blah, blah. This to me feels like a perfect use case for a lot of data to figure out…

Tom Noser (35:07.27)

Can we lock customers in? Will they sign a lifetime banking contract with us in exchange for a preferred rate early on?

Is that a reasonable use case for something like AI?

Sharon Chou (35:24.72)

It would be a great use case for machine learning models. The most tricky aspect in designing the model is to make sure you...

Sharon Chou (35:40.152)

select the right data features. In this case, the model should be on the simpler side, so it is easier to explain why certain decisions are being made.

The hardest part is to separate out the personal features, the factors that affect how trustworthy you are as a borrower.

Sharon Chou (36:10.28)

And separate that from any demographic factors that you happen to be lumped under. That is a really tricky thing to do.

That's where biases come in. We would be concerned with racial and gender biases, all kinds of different demographic biases that come in, because certain…

Sharon Chou (36:37.804)

personal factors end up being correlated with these demographic factors, which results in model bias. That's where the data checks need to be very, very thorough.

You collect a lot of data about a lot of people and whether they have been on time or not in repaying their loans in the past.

The most important factor to me is to look at, okay, how financially trustworthy has this person been in the past?
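One concrete form of the "very thorough data checks" Sharon mentions is to measure, before training, how strongly each candidate feature correlates with a protected attribute. A minimal sketch with invented column names:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000

# Hypothetical loan-application table; "group" stands in for a protected attribute.
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),
    "on_time_payment_rate": rng.beta(8, 2, n),
    "zip_code_income_index": rng.normal(0, 1, n),
})
# Make the zip-code feature correlated with group, as it often is in real data.
df["zip_code_income_index"] += 0.8 * df["group"]

# Flag any feature whose correlation with the protected attribute looks too strong.
THRESHOLD = 0.3
for col in ["on_time_payment_rate", "zip_code_income_index"]:
    r = df[col].corr(df["group"])
    status = "REVIEW" if abs(r) > THRESHOLD else "ok"
    print(f"{col:>24}  corr with group = {r:+.2f}  [{status}]")
```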

Tom Noser (37:21.234)

That's really interesting. It's also a difficult problem for someone who's young.

Sharon Chou (37:27.957)

That is true. Because they have no record previously. How do you tell?

Tom Noser (37:50.186)

They have very little credit history. How little data is necessary to make a confident prediction about that initial loan?

The only thing we're locking somebody in for is to say, okay, well, your initial loan is at X percent, prime minus three. (Bankers are vomiting all over America at that comment.) But anyway, some discount. You could have conditions on it, obviously, like, well, if you demonstrate certain behaviors, you lose this preferred treatment.

Tom Noser (38:15.334)

Your loan reverts back to another rate if you miss two payments, or your credit score drops below a certain number, stuff like that. The whole concept of fairness in lending is really difficult. So are things like being male or female correlated with things like higher education?

Sharon Chou (38:30.876)

Mm-hmm. Right.

Tom Noser (38:41.802)

So achievement of higher education is correlated with earnings. I don't know if it's correlated with creditworthiness. There may be a correlation. And so in some cases, I think you end up asking, is it even possible to be utterly blind to things like race and gender?

Sharon Chou (39:02.256)

Ah, we always really try to be totally blind. It might be impossible to be completely blind, because then you have to selectively stratify the datasets and consider each subpopulation by itself and see what kind of value ranges you get within each stratum of people. That would be one way I can think of to make sure you're representing everyone equally, right? But that's totally tricky.

Tom Noser (39:36.977)

Mm-hmm.

Sharon Chou (39:49.828)

I'm sure, yeah, like your zip code, for example, where you live, like determines a lot about these outcomes.
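Sharon's idea of stratifying the data and looking at each subpopulation on its own can be sketched as a per-group report, for example comparing approval rates and accuracy within each stratum (again an invented, simplified example, not a full fairness audit):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 3000

# Hypothetical model decisions and observed outcomes, plus each person's stratum.
df = pd.DataFrame({
    "stratum": rng.choice(["group_a", "group_b", "group_c"], n),
    "approved": rng.integers(0, 2, n),   # model decision
    "repaid": rng.integers(0, 2, n),     # observed outcome
})

# Within each stratum, compare the approval rate and how often the decision
# matched the outcome; large gaps between strata are what you would investigate.
for name, group in df.groupby("stratum"):
    approval_rate = group["approved"].mean()
    agreement = (group["approved"] == group["repaid"]).mean()
    print(f"{name}: approval_rate={approval_rate:.3f}, agreement={agreement:.3f}")
```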

Tom Noser (39:54.846)

There's other kind of tricky issues about what if gender is somehow positively correlated with credit worthiness for one gender or many genders, depending on how you think about gender, and negatively correlated for other genders. In which case,

Tom Noser (40:24.826)

is ignoring that information ethical if it actually can be statistically proven to have a strong correlation to an outcome?

Sharon Chou (40:39.348)

Oh wow, that is a very interesting theoretical conundrum.

Tom Noser (40:41.978)

Yeah, that's right. So I'll tell you why I love academia. These are all the kinds of things that I really enjoy. Right, right. It's fun. But in product development, I mean, you talked about how you like the forward momentum. I think this is one of the reasons why product management appeals to me, is that

Tom Noser (41:10.066)

you have to answer these same questions. The sooner you answer them, the better off you are.

Tell me about the product development process you've been involved in, and making something that was a lot of fun and rewarding.

Sharon Chou (41:29.85)

I've been consulting with a recruiting company, a firm that does a lot of hiring of various folks, mostly in technology. Initially, I signed on to work on the data pipeline processes. It was deliberately left vague and unspecified exactly what those are.

Sharon Chou (42:02.528)

Which is great, because I wanted to see what the current internal processes are before deciding where the gaps are and how to go about patching them.

I was looking at the internal workflow of how the recruiters would work to make a hire, starting from sourcing a lot of candidates, then funneling down through the various steps, a resume review and interviews, and then presenting to the client companies: these are some...

amazing candidates for you to interview, and hopefully you pick one of them. Throughout that process, I was looking at how to make this more data-informed. There is this effort to make recruiting and hiring more data-aware. The idea is that we have lots of information about the candidates, and we have more and more information about the employer companies.

How do we use that information to find the right match? That initial dating process, how do we find the best match? We would have to kind of look backwards.

Sharon Chou (43:43.352)

Who have been great matches in the past for these kinds of positions at these kinds of places? And then work backwards. Say, in the past, at such-and-such large company, these software engineers have done very well. This was their resume listing and these were their interview notes. Now we'll try to find those same kinds of candidates going forward.
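One simple way to "work backwards" like this, sketched here with invented skill features rather than anything Sharon's client actually uses, is to represent past successful hires and new candidates as skill vectors and rank new candidates by similarity to the successful profile:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Invented skill vectors: [coding, documentation, cross-functional work, domain depth].
past_successful_hires = np.array([
    [0.90, 0.60, 0.70, 0.80],
    [0.80, 0.70, 0.90, 0.60],
    [0.85, 0.50, 0.80, 0.90],
])
new_candidates = {
    "candidate_1": [0.90, 0.40, 0.80, 0.70],
    "candidate_2": [0.30, 0.90, 0.20, 0.40],
}

# Profile of what has worked before: the average of the past successful hires.
profile = past_successful_hires.mean(axis=0, keepdims=True)

for name, skills in new_candidates.items():
    score = cosine_similarity([skills], profile)[0, 0]
    print(f"{name}: similarity to past successful hires = {score:.2f}")
```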

That's how I envision the process: applying more data to this, making it more data-informed. I was trying to work toward that. A lot of the time, recruiters are very bogged down by...

Sharon Chou (44:46.588)

logistical processes, like just trying to get this form out, trying to contact a bunch of candidates and getting their details straight.

They were already bogged down with that, and it's very hard for them to step back a bit and think, how do we focus on making a better hire? So then we can focus on making the software they're using easier to use.

Sharon Chou (45:16.502)

We have so many complicated gizmos here and there. Let’s make the interface look simpler so they can focus on asking the right interview questions, for example. One of my projects was how to simplify that UI, how to simplify that workflow for them. And also,

Sharon Chou (45:39.644)

reworking some of the interview questions that they ask during screening: these are some of the interview questions that you can ask. It's a matter of making it easier for them from the UX perspective, so that they can focus on what's important.

Tom Noser (46:03.183)

It's a hugely challenging problem. So I have a number of, of course, theoretical questions to ask.

I'm not convinced there's a correlation between resumes and performance. If I'm looking at my best performing

Tom Noser (46:31.338)

people and it turns out all of them went to Tennessee Tech. Is it fair for me to conclude that I should only hire people from Tennessee Tech? Or, as a scientist, statistician, data scientist, at what point do you have a large enough sample size to make that kind of a conclusion?

Sharon Chou (47:01.96)

That is certainly a tricky question, right? Because your own sample size is always kind of small. The only places where you have a large sample size would be very large companies. And government, indeed. Yes.

Sharon Chou (47:25.904)

It becomes a question of what factors are the most important in deciding whether someone is a good hire. The research literature suggests that it is the skill-based factors: what the job actually entails and what somebody needs to be able to do on the job.

It could be that they need to know how to code, how to write documentation, how to work cross-functionally with a bunch of different stakeholders. Different jobs have these skill listings that sometimes are not apparent…

Sharon Chou (48:15.26)

in the job description, which is another problem. The job description should reflect, as much as possible, what actually needs to be done on the job, so the candidates can be informed about what is needed.

Tom Noser (48:20.191)

Yeah, right.

Tom Noser (48:35.026)

I have a beef with job descriptions. I think job descriptions at best, at the very best, describe a mediocre candidate. Every job, in my opinion, people who really excel at it remake that job in their own image. They bring a set of skills that you don't even know you need…

Sharon Chou (48:55.936)

Yeah, it certainly happens sometimes.

Tom Noser (49:05.166)

In that job, because they're able to anticipate needs you didn't know you had. Then they're able to fulfill those needs in ways that you didn't even anticipate.

Sharon Chou (49:10.645)

Right.

Sharon Chou (49:15.444)

Yeah, job descriptions are very tricky. When done well, they are at least helpful in finding a good hire. You can have a list of skills that you really should have to do the job at a minimum level. Okay. Then you can have a list of…

Tom Noser (49:21.598)

I think they're worthless. Ha ha ha.

Tom Noser (49:32.299)

Right, right.

Sharon Chou (49:43.512)

a more wishful set of skills: okay, to do this job really stellarly, these are the skills that you should have. Use those two categories of skills to inform the sourcing and the interviewing process, which I feel is a good general guideline for hiring.

Tom Noser (50:11.966)

It's more general probably than we want to admit. So there are certainly jobs like, I don't want someone conducting surgery who didn't go to medical school and who has done no surgeries. Now every surgeon at one point has done no surgeries, but I don't want anybody being unqualified for that. But for something like, I want you to be a salesperson.

Sharon Chou (50:22.409)

Or has done no surgeries at all. Yeah, exactly.

Yeah. Haha. Yeah.

Tom Noser (50:41.286)

Or I want you to be a product manager, or even I want you to be a software engineer, certifications and formal training, in my opinion, start to become less and less important.

When I interview people about product management, I ask them, have you ever wanted to start your own business? That question reveals a lot about

Tom Noser (51:09.954)

how somebody looks at the world and how they see themselves in it. People who start their own businesses are sort of by definition misfits, because they can't see themselves working with… even if it's just the conditions of their own life. I would love to see data inform hiring.

Sharon Chou (51:20.384)

Well, it's like, oh, you know, something isn't quite right, so I'm going to change it. Yeah.

Tom Noser (51:39.87)

My concern, going back to our sausage-making analogy earlier, is that all of the data involved in hiring today, job descriptions, job performance reviews, is garbage, and AI isn't going to make it better. It's just going to sort of reproduce the current bias-laden system. My hypothesis is that...

Tom Noser (52:07.638)

We hire people we like, and then we go through a process to justify hiring them.

Sharon Chou (52:11.316)

Yeah, that is totally the way lots of companies are doing it right now.

So how do we fix that? How do we fix that broken system?

Sharon Chou (52:20.7)

There needs to be a shift in thinking because

Sharon Chou (52:28.928)

The hiring team needs to be very disciplined in making sure that you always follow a structured and organized process, during resume evaluation and especially during interviews.

It starts off by standardizing those interview questions, so ask everyone the same questions, preferably in the same order, even if it feels awkward.

That is really the only way you can compare the candidates at all. Give them the same environment, the same set of circumstances. That's how you can evaluate their answers. Ask them the same questions. There are even additional techniques, like when you evaluate their answers, look at the same question, look at only one question...

Sharon Chou (53:33.884)

look at how everyone has answered question one, and then assign a score to them. Sometimes you might even have to go through the candidates in a different order: look at all their answers to question one in a different order, and would you still score them the same? Would you score them the same in the morning after you've had coffee as in the afternoon when you're a little tired? There are all these...

Sharon Chou (54:02.66)

factors where biases can creep in. The evaluation process is supposed to minimize those biases. You would have another person from your interviewing team look at the same answers. Is the other person gonna score them the same way? Try to combine all these evaluations into the final decision. That’s how it is supposed to be done.
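The bookkeeping behind the structured process Sharon describes, same questions for everyone, per-question scores, multiple raters, everything combined at the end, fits in a few lines. This is a hypothetical sketch, not a claim about how her client implements it:

```python
from statistics import mean

# Hypothetical scores: scores[question][candidate] is the list of ratings (1-5)
# given by each member of the interview panel.
scores = {
    "q1": {"alice": [4, 5], "bob": [3, 4], "carol": [5, 5]},
    "q2": {"alice": [3, 3], "bob": [4, 4], "carol": [4, 5]},
    "q3": {"alice": [5, 4], "bob": [2, 3], "carol": [4, 4]},
}

# Average across raters per question, then across questions per candidate,
# so every candidate is judged on the same questions by the same panel.
candidates = {c for per_question in scores.values() for c in per_question}
final_scores = {c: mean(mean(scores[q][c]) for q in scores) for c in candidates}

for candidate, score in sorted(final_scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{candidate}: {score:.2f}")
```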

Tom Noser (54:32.258)

Right. What you just described to me sounds like a machine. The process an AI would follow.

Sharon Chou (54:35.672)

It does, right? It almost sounds algorithmic. That's the idea of introducing a more scientific process into something we would traditionally think of as very intuitive, like hiring. It's traditionally very intuitive.

Tom Noser (54:41.013)

Mm-hmm.

Tom Noser (54:56.014)

I liked what you said earlier about how the best use cases for AI involve collaboration between the person and the machine, and not the person eating a sandwich while the machine drives them to San Diego. The application of AI in a hiring process can work effectively as,

Sharon Chou (55:13.153)

Yep.

Tom Noser (55:24.042)

a bullshit caller of, well, the algorithm you told me to use to evaluate candidates resulted in this list of candidates ordered top to bottom. You chose number 19 out of 20. What did you either miss in the algorithm, or are you lying to yourself about something? You're choosing 19 not for

Tom Noser (55:53.002)

defensible reasons. That kind of decision cop could be a very beneficial application.
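Tom's "decision cop" could be as simple as a check that fires whenever the human pick sits far down the model's ranked list. A hypothetical sketch:

```python
def flag_divergence(ranked_candidates, human_pick, tolerance=3):
    """Return a prompt for the hiring team when their pick diverges sharply
    from the model's ranking; return None when the choice looks consistent."""
    rank = ranked_candidates.index(human_pick) + 1   # 1 = model's top choice
    if rank > tolerance:
        return (f"You chose the model's #{rank} of {len(ranked_candidates)} candidates. "
                "What did the algorithm miss, or what criteria are you actually using?")
    return None

# Example: the model ranked 20 candidates and the team chose number 19.
ranking = [f"candidate_{i}" for i in range(1, 21)]
print(flag_divergence(ranking, "candidate_19"))
```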

Sharon Chou (56:04.096)

At the end of the day, after you gather all the data that you can, that final decision is human, right? The human needs to aggregate all the data and synthesize it into a decision. And that's the part that machines cannot do right now.

Tom Noser (56:14.742)

Yes.

Tom Noser (56:29.346)

I've heard that all decisions are ultimately emotional. That it's impossible to make a decision in a purely unemotional context because you can't... there's an outcome. Every decision is leading toward some kind of outcome. To choose between outcomes, you have to make some kind of ethical or emotional decision.

Sharon Chou (56:58.552)

That actually reminds me of a book, The Emotion Machine, if I remember the title correctly, by Marvin Minsky. He was an AI researcher from the early days who taught at MIT for a very long time. He wrote that book, and fundamentally it's about how humans have all these different mental processes, with emotion being a pretty important one.

Can we ever have algorithms that simulate those mental processes? That is a very interesting question to me.

Tom Noser (57:33.038)

It's way up there.

Tom Noser (57:49.226)

It is. I love that. We'll end there. I want to bring you back another time and talk more about the work you're doing now. I'd love to talk about your experience at MIT, and how basic research and then applied research has led you to your current life. You're a practical scientist.

Sharon Chou (58:01.592)

Sure.

Tom Noser (58:16.318)

In that, there's basic science and then you're a practical scientist. You're applying the scientific method to solving business problems, particularly as they relate to data and machine learning.

Sharon Chou (58:22.942)

I would actually circle back to the question that you asked earlier, which I haven't fully answered. My experience as a researcher has really made me approach problems in a very...

Sharon Chou (58:44.352)

scientific way. In research in the sciences, we are always expected to learn the past literature, to be thorough about knowing what other people have done before. Any related research - that's also how you find out whether somebody has already done the same thing you are trying to do.

Sharon Chou (59:14.827)

By knowing what lots of other folks have done in the past, you can decide how to make something better, how to build on people's past work to improve the status quo, which in business is a similar idea. You have to improve upon the status quo, make something bigger, faster. Study what competitors are doing: competitive analysis, market research.

And at the same time, be discerning and somewhat skeptical about the data sources that you find. This is a really important aspect of science research that would inform business approaches well, too.

Sharon Chou (01:00:16.672)

Different stakeholders have different sets of motivations and different vested interests, and the way they present information is very much affected by that. Keeping that in mind at all times is very helpful when working in industry, or in general.

Checking assumptions any time a piece of literature comes your way. This looks like a great research paper. Oh, is this sponsored research? Okay. Yeah.

Tom Noser (01:01:00.17)

Yeah, I love that. I absolutely agree that we should have respect for the work that's been done before us and be skeptical of all of it. It's great advice.

Sharon, it was a pleasure. Thank you so much for joining me today.

Sharon Chou (01:01:18.74)

Yeah, it's a great chat. Yeah, pleasure. Definitely. Yeah.


