Role of AI in Career, Business, and Beyond | Ben Gold


In this conversation, Ben Gold, an AI strategy consultant, delves into the transformative role of AI in various aspects of our lives. He explains its impact on careers, workplace efficiency, ethical considerations, and future implications.

Ben Gold is an AI strategist and keynote speaker with over 20 years of technology experience. He specializes in guiding companies through the AI adoption process with customized strategies. Ben offers consulting services that include assessments, pilot programs, training, strategic roadmaps, and impactful keynotes, all aimed at helping organizations achieve tangible results. By demystifying AI and translating its complexities into accessible strategies, Ben enables businesses to harness AI’s power to work smarter, faster, and more effectively.

Takeaways

  • AI can be leveraged by job seekers to improve their resumes, define career goals, prepare for interviews, and enhance job retention.
  • AI is transforming the workplace and education, enabling new teaching methods and improving business processes.
  • Centralized databases and AI analysis can provide organizations with actionable insights for decision-making.
  • Implementing AI at the C-level and developing policies is crucial for the successful integration and utilization of AI.
  • AI can significantly improve efficiency in sales by automating manual tasks and freeing up time for sales reps to focus on more important activities.
  • AI can be used in product management to gather accurate usage data and optimize products based on customer needs and preferences.
  • Ethical considerations are important when using AI, and organizations should carefully evaluate the biases present in AI models and double-check their decisions.
  • Deepfakes pose a significant threat, and individuals should be cautious about believing everything they see or hear, especially in the age of AI-generated content.
  • The future of AI is uncertain, but it has the potential to revolutionize entertainment, robotics, and various industries. The impact on the job market and society as a whole remains to be seen.


Sponsors

Webflow – Create custom, responsive websites without coding

MeetGeek – Automatically video record, transcribe, summarize, and share granular insights from every meeting to any tool


Timestamps

00:00 Introduction

05:01 Leveraging AI for Career Advancement

09:45 AI is Getting Better Over Time

15:25 Transforming the Workplace and Education with AI

20:14 Opportunities for Organizations to Leverage AI

22:49 Implementing AI at the C-Level: Developing Policies

26:43 AI’s Efficiency Gains in Sales

28:48 AI in Product Management: Optimizing Products

32:41 Ethical Considerations and Bias in AI

38:42 The Dangers of Deepfakes

44:29 The Future of AI: Entertainment and Robotics

48:27 Societal Implications of Widespread AI Adoption

Transcript (Edited by Vit Lyoshin for better readability)

Vit Lyoshin (00:00.742)

Hello everybody and welcome to the Vit Lyoshin Podcast. Today we have a new guest, Ben Gold, an AI strategy consultant and keynote speaker with over 20 years of experience in technology.

Welcome, Ben. Nice to have you here.

Ben Gold (00:16.228)

Nice to be here!

Vit Lyoshin (00:18.31)

Today I invited you to talk about AI, how people can use it for their careers and the opportunities that are there for organizations. And we will also cover a couple of questions about ethics and future trends and where it is going.

But before we jump in, could you share a little bit about yourself, how you became interested in AI, and the work that you do?

Ben Gold (00:53.316)

Yeah, thank you so much, Vit. And I appreciate being on this podcast. So I have 20 years of experience, as you mentioned, in the technology field. The last six years have been with different AI products. When I was in the corporate world, I was with a call tracking company that had this really cool AI tool. It was not generative AI. At the time, it was just an amazing capability: it could look at 30,000 phone calls for call centers, go through the calls, make transcripts, and say this was a good call or a bad call. So I was working with the data science team and with our clients on how we leverage this actionable insight, this idea that the technology can compress all these unrelated ideas and then instantly give you actionable insight. That was what really fascinated me.

Now, ChatGPT, or generative AI, when that came out at the end of 2022, was a step further, where this capability was basically unleashed to the masses. What used to be something that Amazon and Google and Apple and Netflix would use with really expensive data scientists and algorithms, suddenly the rest of the world could get those insights and start using AI for lots of different creative activities and work-related tasks.

So for the last year I have been focused on AI consulting. What I do is help professionals, organizations, and businesses deploy AI and navigate the minefield of finding the best way to have a secure and ethical deployment. How do you upskill the people who are really not AI savvy without boring the people who are? How do you create a culture within organizations so that the people who are better with AI can help and work with the others, and make sure that everybody in the organization has access to the same tools?

Vit Lyoshin (03:19.526)

I see. So you’ve been doing this for six years now and you mentioned a couple of things about how you help organizations and especially people who are new to this. So why don’t we start with the career aspect of it and how people, job seekers specifically for example, how they can leverage AI and find their next career move.

Ben Gold (03:44.228)

That is where I started. My first application, looking at what ChatGPT or these other models could do, that was the first use case: I'm a job seeker, how can I use it to improve my resume? And there are a lot of prompts you can give it. If you know what to ask for, it can help you make your resume clearer.

If you’re trying to define your career goals, throw in an aptitude test, throw in what you like to do, throw in an old version of your resume, and it can suddenly spit out five other, five different avenues you can pursue. These are things that used to take weeks or even months with a normal career process now being done in a one-hour session. There is the whole point of being able to write cover letters, to be able to prepare for interviews.

So there’s this component of finding a job with AI. But I think the more important one is keeping your job with AI. And it’s really important that for those that are employed, especially, it doesn’t matter what your role is. You could be in marketing, sales, HR, or product engineering. All of these roles have AI technologies that can be applied that will basically eliminate the manual test.

So it’s important to really, to be on track and to be aware of what are the updates, what are these tools in your industry, because every company is asking the same question, how can we leverage this to be more efficient, to improve our return on investment? And if you’re the employee that says, hey, I’ve tested Mid Journey and I’ve tested Dolly3 and I’ve tried these video creation programs, this is what I think, that’s gonna make you a more valuable employee.

Vit Lyoshin (05:39.974)

I see. So are there any sort of assessments for people who are employed, like assessments for the job that they perform, or a development plan that they can create using ChatGPT or other models and go from there, basically? Is that the idea?

Ben Gold (05:57.476)

Well, it’s, you could use, so I’m not, when you say the word assessment, it can be interpreted in many ways. So I’ll choose to interpret it like assessing what I should be doing with my job. So as an employee, how can I do better? And I look at that as an ideation exercise.

So you could say to ChatGPT, hi, this is my role within an organization and this is what my organization does, and maybe show something from the website. This is my job title. I have these goals to achieve. Help me identify projects that are doable, that would improve my role or improve the company. And give me a timeline of the energy and effort and what I would need to do to carry those out. That's a great use case of using AI to understand how I could improve where I'm at, or how to improve myself.

Vit Lyoshin (06:54.79)

And I guess it can also add to that, like what certifications maybe you need to get, or some books that you can read, or some organizations to join for networking and getting your knowledge some other way, right? It's just going to give you that list of actions, if you will, not necessarily how to do a job for this particular company, but in general.

Ben Gold (07:24.612)

That is a good point you just made here. I like to say that I'm not a technical person. I can talk to data scientists and I understand them. I can talk to programmers, but I am not a coder. And it's very possible that you don't need to learn to code. I mean, if you want to learn Python, which is the programming language that goes very well with data science, absolutely you can do that.

So you don’t have to be a technical person to leverage AI. And so that’s what I try to explain to people. There’s this mythology that either AI is going to destroy the world. So you hear that from people. Or AI, it really isn’t that good because they tried a free version of ChatGPT six months ago, and the output wasn’t that great for this person. But what I help people do is to look at the tools out there and I show them what’s available because it is going to transform our workplace. It is transforming it. Whatever you’re looking at today is the absolute worst version of the AI out there.

So for example, I was on Twitter and there was somebody criticizing this new product called Luma.ai, which is a kind of video generation from a text prompt. And he was going through this frame-by-frame analysis of why this is a terrible animation: hey, the monster's head was green here, but now it's light blue over here. And I was like, when the model gets 100 times more powerful or 1,000 times more powerful, those issues will go away. You need to not get so caught up in the tree in front of you that you miss the forest.

This technology is not going away.

Vit Lyoshin (09:47.91)

It needs time to train itself on more information, right? It's like the story I heard before about Tesla, that every car essentially has its own server in it and is analyzing all the roads it drives on, regardless of whether you're using Autopilot or not, and then it synchronizes with the main server. Basically, this is going to be the best model for self-driving that Tesla is building. So it's actually an IT company, not a car manufacturing company, to some degree. And something like that doesn't happen overnight. For AI models, you need some time to train the model, right? So that's the key here.

Ben Gold (10:31.14)

It is. And that’s a great story there because I heard the same thing, that’s one of the things that Tesla’s done that has given an advantage over the competition is because all of their cars are gathering data in a way that other companies are not.

Now, you brought up a good point. And I should say that right now there's an AI arms race, and it's on so many different levels. It's company versus company, and that arms race is about what you can do inside of a model and when you release it. So you have some people who are for safety and some people who are for innovation, and the innovators are winning. And I don't think that's a particularly good thing.

There was a paper published last week by a former OpenAI employee. I don't know if you heard about that. He was talking about OOMs, orders of magnitude. And what he described was the fact that if you take the last five years and look at GPT-2 up to the new version, GPT-2 was like a preschooler level, GPT-3 was basically middle school, and GPT-4 is like a high school level.

If you continue this progression, his argument is that now that we're investing billions, then tens of billions, then hundreds of billions into these server farms that are going to create compute power, if you were to carry this out five more years, you would have superintelligence. Before that you get what's called artificial general intelligence, which, by his argument, arrives by 2027, where a computer can function as well as any human in a job role. And think about that.

Imagine what that means for the world. If you just upload the data of a company, now you've got a sales rep that can do outbound calls, can run Zoom meetings, can sign contracts. Okay, that's how I interpret AGI. I'm sure that technically I'm off on something, but that gives you a sense of where it is. He was arguing that it goes to the next level, that once you hit AGI, the AI can begin to train itself. And when the algorithms train the algorithms, that's when they get smarter than humans. Right now, we still train the algorithm. We have some sort of control.

So I just thought that was an interesting one. Now, there's a third component: beyond the innovators versus the safety people, and the companies that are in this competition, you've got the country point of view, where the Chinese and the Americans are really the two countries in this big arms race to own or be in charge of AI.

Vit Lyoshin (13:24.07)

Wow, okay. Well, it can be scary to some degree and at the same time exciting, right? Because some people may say, well, we need more people, but we can’t hire. So here you go. You just push a button and you spawn like a hundred bots and they do a great job. But some other people say, no, no, they’re going to take my job. I don’t want to do that.

Ben Gold (13:47.652)

And that’s the part that I’ve, you know, there are parts where I’ve seen where you implement AI, the company becomes more efficient, they hire more people. I’ve seen other situations, you implement AI, you have automated a task and you let people go. I do think that when we hit AGI, that probably more jobs will be lost than created. That’s my opinion.

Vit Lyoshin (14:10.982)

Yeah, well, but then people can retrain as well and go do some other supporting jobs, right? So that’s my hope at least and my belief that we still need more people than we have right now to survive.

So anyways, you mentioned a little bit about how AI is changing the workplace right now. Do you have a couple of examples of what’s coming in the short term or what’s already happening?

Ben Gold (14:39.876)

I’ll give you an example. And this is a combination of workplace and education. So this week I was at one of my clients, which is a private school. And I’m talking with the teachers about how they have to change the way they teach. And this is as I’ll give you, this is a microcosm. I know it’s education, but think of this as any other company where there was this way that they did things, where they assigned lessons and they gave homework. Now the students can go and type into chatGPT and they can generate something that two years ago would have been an A paper like that, like, wow, you’re smart. But now it can be done pretty much on any device and that’s ubiquitous.

And so what we were talking with the teachers about was, you need to change your homework. You need to challenge the kids in a different way. Have them use AI, give them an assignment, and let's be creative. Now you can do so many things. You could have it where they can talk to one of the characters. So you assign them to have a conversation with, let's say, any major character, Othello or Hamlet, and be able to ask them questions and get a response.

And there are ways that you could be very creative in how you take this technology and shift the way you allow people to understand new ideas. That was really tough, because inside the school some of the teachers were really up to speed and knowledgeable in AI while others were just beginners. They don't even understand what ChatGPT does, much less how to incorporate it into a lesson plan.

But this is a microcosm of almost any company where you have a marketing team of 10 people, and maybe three of them are really using ChatGPT for everything, and five of them kind of know how to do a few things but haven't really leveraged it. So it's an adjustment for every business out there, because they need to figure out: how do you optimize your workforce? How do you do it in an ethical way where you don't just let everybody go? How do you do it in a way that accounts for bias, and I know we'll get into that? And how do you infuse this new technology into your culture without destroying the culture or destroying the company?

Vit Lyoshin (17:18.022)

Yeah, I can see that could be a challenge, and also that's how it's changing, basically. People are sometimes faster or more efficient in producing documents, producing content, or whatever, if we're talking about marketing, for example. What about engineers and developers? It's probably pretty much in the same place. They have some copilots, right? Automated testing applications or anything like that, that can look at the code base and figure out where to plug in some tests, or fix bugs by themselves, or something like that. Have you seen any of that? I just heard rumors about it.

Ben Gold (18:00.676)

You mean AI being able to do it without human supervision? It's not there yet, but I've seen some demos where it's getting smarter. The thing is that, for example, for customer success, we don't quite have the ability yet. I've seen some demos that come close: if you call in, it can answer, talk to you like a person, have a personalized conversation, and help you with certain things. But it's not quite at the point of being able to replace a customer success rep.

For the engineers, I think one of the scariest parts is that that used to be the safest job out there, being a coder and knowing JavaScript and all these languages, and that made you a specialist. And now these AI models are out there and can produce code.

And they’re starting to be able to make it more and more complicated. So think of it like this, that you have a task. That’s what generative AI is, is doing a task. The next element is called an agent. An AI agent is when you put tasks together. Now, when you put enough tasks together, you have a job description. And that’s where technology is headed.

Vit Lyoshin (19:27.238)

I see. Okay. So, as you said in the beginning, eventually it will get there as it gets more training, more use cases, and more practice, if you will.

Let’s talk about the organization level now. What are the biggest opportunities for organizations to leverage AI today?

Ben Gold (19:50.948)

There are so many opportunities for this. First of all, most organizations, when they think of AI, think of it in their external product. So if I'm a software company, I'm going to put it into my product, and then people will see that externally, like I have some sort of integration with ChatGPT. Or if you're a school, you might put it into the school's curriculum so the students have access to it. So that's one part of it, the external.

The internal is where it's a case-by-case basis, because of the potential of having a centralized database with all of your information. So imagine you're a company with 200 employees. Imagine having all of the sales data, your customer success data, all your customers and what they've purchased, all of your product information, maybe your social media content and social media reactions. If you have that in one centralized place and you had an AI that could analyze all this data and then give best practices, hey, based on the feedback we're seeing in the market, this is what I think our blog post should be, or where we should focus over the next couple of weeks, it could give you that kind of actionable data.

Looking at sales reps and saying, this is the persona profile, this is how you should be talking to this kind of buyer, and then being able to analyze actual transcripts. And that is the capability right now. You can analyze transcripts, feed them into an AI, and say, what are my next best steps? Help me write my next email. Help me come up with a proposal that they won't say no to.

So this is part of this automation. Think of it as a wheel with a central hub and different spokes: putting all the data into that central place and then giving all the different business units access to it, so they can train on this data set for their particular use cases. That's the future. Not a lot of companies have gotten there. Some of them have, let's say, put together a data lake for their financial team to do financial analysis, but they don't share that with the other members of the company.

And that’s where it really is a cultural shift of getting people to say, hey, let’s have an AI policy. Let’s first of all, get everybody to the table and talk about this. And the first thing you need in place is a policy. The policy will determine what is generative AI tool we’re going to allow. So you have to give the employees access to ChatGPT or Cloud or Microsoft Copilot.

If you don’t, then they’re going to go outside and they’re going to use their personal devices. They’re going to put private company information, get an output, and stick it back into their work because it’s so much more efficient. So come up with AI tools that are acceptable, and put security and privacy in place. And then once these policies are there, you can then be able to work business unit by business unit on how you’re going to leverage this.

Vit Lyoshin (23:11.814)

So where should this start? Does it start with the leadership, or is somebody at the management level best suited for trying this out? And when you start working with these companies, what do you recommend?

Ben Gold (23:28.996)

I recommend it’s C level. The C-level leadership has to make some, the reason is this, that there are some security and legal issues that really do need to have C-level intervention. And this depends on the size of the company. If I’m a real estate brokerage with 10 independent agents, then each agent can kind of do what they want to do because they are not dealing with personally identifiable information, but it’s not like it’s being housed in a central place and employees have access to it.

So as companies get bigger and they have their client data, it's up to the owner to say, we need to implement this, but we need to make sure that we do it in a way where we don't pass any social security information, credit card information, biometric information, or health data. We don't want that to go outside of the company walls. Or if we do, it's got to be through a very secure connection with something like ChatGPT Enterprise or Microsoft Copilot. So these are the criteria.

So that has to come from the C-level suite. It has to be coordinated with IT. And once you get there, then you can talk about the cultural shift of figuring out who in each business unit is excited.

Like with the school I mentioned, I said, this person is in the math division, he looks like he's really excited; this lady over here in the humanities seems to be that excited person. Let's have them play a role. Because once you determine what the tools are, you need to decide what you're allowed to do. So the policies are really important. The policies will say: these are the tools you can use, these are the tools you can't use, this is the data you can use to train, this is the data you can't use, and this is how you disclose it to our clients.

So sometimes companies, especially marketing agencies for example, need to disclose, or want to disclose, how they use AI. And that's all in the policy: we want to let people know what we do and what we don't do. Then you've got this clarity, because if you don't do that, every new employee is going to bring their own AI to work, and you're going to have a much greater danger of data leakage when you don't have a policy. But from the policy, you can then begin to create A/B tests and go business unit by business unit.

For example, with a sales team, there are a bunch of tools that do lead generation, that do lead enrichment at scale, that do hyper-personalization at scale. These are great AI tools, but just like with any other tech stack, you want to make sure you have a policy that says we're allowed to use these tools; then you can evaluate the tools against each other.

But I’ve said this to some of the companies I’ve talked with that have not moved forward. I’ve said to them, you have to do this because this is going to make your employees 80% more efficient. Okay, your sales reps are going to cut. So let’s say 80% of the manual tasks. Let’s say overall 40%. So in a 40-hour work week, I could shave 16 hours off of a typical sales rep, because they spend three hours prepping a deal. They spend all this time taking meeting notes and writing emails. I said, if you deploy this, you’re going to have an 80% efficiency gain for these tasks.

And if you don’t, there’s going to be either one of your competitors that’s going to deploy this, which is going to cut their costs down and going to make them, they’re going to become more competitive. And if your competitors don’t do it, there’s a startup in your space right now that’s starting from scratch, that’s going to be a hundred percent efficient, AI first, AI native, that’s going to build from the ground up a service that’s going to be half the price of yours and it’s going to eventually take your clients away.

So you’ve got to, this is how you got to think about this because you can’t just ignore it and go la la la la la and not take action on this trend.

Vit Lyoshin (27:56.742)

I think it happens every 20 or 30 years or something: some new technology comes up, and everybody who jumps on it survives, and the rest have to either catch up or that's it, they go bankrupt.

I was also thinking, while you were talking about all of that, about the product management case, and how companies can probably start using the usage data from their products. Because there is a problem in product management: when you talk to people, they may not be completely honest with you. I'm not saying everybody, but some people try to tell you what they wish instead of what truly is. And those types of tools can help you really get to the truth of how they've been using the products, what they really need, what they really want, and which features are not used at all. It will all help you optimize your products, remove unnecessary stuff, and focus on the things that are truly being used, right? The things that bring revenue, and so on.

I think people should really think about it, and if they are not C-level already, convince the C-level to start using it. Or if they are, just create a policy, like you said, and allow people to use it. Because I know so many people use it at a personal level, even within companies. So it has to be organized somehow.

Ben Gold (29:43.908)

Well, first of all, those are some really good points that you made there. And absolutely, I wanted to add on top of this the idea that when you come to work, think of AI as a tool. The best way to look at it, like you mentioned with every 20 years, although I think this is a little different, is to think of when the internet came out. Simply saying, we turned on the internet, okay, well, that's great. But the internet is not just the connection.

It’s not getting a connection with AOL or AT&T today, it’s email, it’s social media, it’s your website, it’s your voiceover IP, it is all these other things that it’s the same thing with AI that it’s not just, hey, we got a connection to chat GPT for one of our products, we’re covered. No, it’s content creation, it’s video creation, it’s ideation, it’s data analysis.

AI touches every facet of creating, managing, leveraging, or ideating, using data to come up with really good feedback loops. This is now available. It will take a couple of years for this to get deployed throughout the corporate structures around the world. It's not going to happen overnight, and the larger companies will need more time to deploy it. But it's important to say that even if we did not have a single improvement starting tomorrow, let's just say for the next 10 years we freeze AI research and you cannot release anything better, it would still take us years to deploy what we have today. And the efficiency gains are going to be astounding just with what we have.

Okay, that’s just to get to that point but yet, what Sam Altman has said to us from ChatGPT is that when his 5.0 version comes out, you’re going to look at 4.0 as being a baby step, that it’s supposedly this much better. And I’m like, I don’t know when it’s this much better. Are we at AGI already, at artificial general intelligence, or is it going to be a step slightly below where the sales rep might not quite be able to do outbound phone calls? But, you know, that’s the question everybody’s asking for 2024 is will chatGPT 5.0 be that how far of a step up will it be?

Vit Lyoshin (32:20.134)

Yeah, that’s become really interesting.

So we mentioned ethics and bias already, slightly. How do we make sure that when AI in a particular company, let's say, learns all this information about customers and internal processes and how people use the information, it actually behaves ethically and isn't trying to cut corners and create something shady, things like that?

Ben Gold (32:54.852)

That’s a great question. And it’s not a simple answer because bias already exists. There are power structures that have existed over hundreds of years. And when you’re training data, when there was racism, wars, ethnic cleansing all over the world, and this data gets put into these large language models, it becomes a challenge to be able to say, well, how biased is this?

The way I recommend organizations navigate this is to go use case by use case. So for example, customer success. That's going to be the least controversial. Let's just say I'm a t-shirt manufacturer and somebody wants to know how much this t-shirt costs. Well, the AI can use a database. And if they say, what's the delivery charge if I want to get 100 of them, it would be able to process that and have a normal conversation.

Where this gets a little bit challenging is when you get into areas where you have legal constraints. For example, in resume screening, Amazon created an AI model to screen resumes. The problem was that 85% of the people in the training data were male. Therefore, it learned: we want to hire males. So anyone who came from a women's college or showed some sort of feminine markers was automatically rejected, because the algorithm processed data that was already biased and therefore amplified that bias.

So when we’re talking about putting this inside of an organization, you want, number one, to look at the company that is providing you the technology. You want to make sure that that company has an algorithm that’s been tested. The second thing is you’re going to want to always double-check it. So for example, if you are going to use it at the human resource level for either resume screening or promotions, so as you’re helping people, as they improve their career within the organization, you’re really going to want to double-check it, and make sure, hey, here’s 10 resumes. I just want to make sure that they were rejected for the right reasons, and that this person didn’t have this qualification. We need C++, they didn’t have C++, therefore that was a deal killer. 

I was reading this really good book by Salman Khan, who’s the founder of Khan Academy, and he had this really interesting argument that AI is always going to be biased. The question is, is it more or less biased than current methods? Because even when you screen candidates, there’s human bias. There’s all these things that are at play. Therefore, the question is, does AI get us a better result? And I think that’s a better question to ask. Because if you say, is AI biased? Of course, it is. It’s made by humans.

Vit Lyoshin (36:09.41)

I was also thinking that when we’re talking about personal interactions and connections, that first impression matters a lot, right? When you first meet a person, their appearance or maybe how they talk, how they behave, all that gives you an idea of who you’re dealing with, basically.

But with AI, they don’t have that. They’re just analyzing the data, but the underlying data is just the statistics that are behind it. They just figure out how close it is to this case, or how close it is to that case. So it is very interesting how we can somehow maybe retrain the same model on some baselines, basically, that are kind of as neutral as possible. I don’t know.

Ben Gold (37:01.988)

Well, I've had this discussion with a couple of other HR professionals, and my personal opinion, again, is to be very careful about using AI for screening candidates' resumes. But I would use AI once you have the interview, for the interview transcripts. Now you've had interactions. You have the full dialogue. You can begin to compare that against company values, against the culture, against, for example, the job requirements and the expectations. I think that's a time when you could have an AI opinion.

So you would have the human opinion of whoever was on the call. But I think it's valuable to have a third, AI-based analysis that can say, you know what, these are the things that you humans missed; because you were so focused on how the person was talking, you might have missed a capability that we'd like to bring to your attention. But you don't want the AI to be the one that makes the final decision, yes or no. I think you would want to have it incorporated into the overall decision-making.

Vit Lyoshin (38:16.55)

Yeah, that makes sense. That’s like having a conversation and also collecting some data from surveys from a number of people and making decisions based on both. Again, from the product world, I guess, the example.

So what about deepfakes? What are they, and are they dangerous? How can we deal with them?

Ben Gold (38:43.812)

Yeah, they’re pretty dangerous. So that is, that’s one of the things that we’re going to see. And I’m really not excited about going through this election season because I’m just waiting for it. I’m waiting for that video of Joe Biden kicking his dog or Donald Trump saying something that is, let’s say not, that would not, that anyway, that is, that is generated via AI.

The deepfake capability is a big deal. For example, all I need is 10 seconds of your voice to replicate your voice. And when you think about that, what could somebody with nefarious goals do with your voice? They could go after your bank account. So a recommendation: if you have a financial institution where you've enabled voice activation, cancel it. That's number one. You don't want someone to call your institution and get into your financial accounts.

The second is you can think of getting a phone call from a generative AI bot. It could be from a son to his father: dad, I'm in trouble, I'm in the hospital, can you wire me $1,000 here? And that could sound convincing when you're hearing an AI-generated voice for the first time. Or somebody unsuspecting could be lured, a woman who thinks she's meeting her boyfriend or husband, into a very dangerous situation. So these are the kinds of use cases.

I think the one that is very famous is the case at an Asian company where a gentleman was tricked in a Zoom meeting. They actually created a fake Zoom meeting with all the executives in the room, and he transferred $25 million to a fake shell company. Anyway, that's an extreme example, but it's coming. And my recommendation is don't believe anything you see, don't believe anything you hear. Get a second opinion before you make a decision, especially if you're being asked to do something quickly; if it doesn't seem right, it probably isn't.

Vit Lyoshin (41:13.734)

We used to get spam emails, I remember, like 15 years ago, saying, hey, your relative died and left you all this money or property, just give me your information. I mean, I know a lot of people still get scammed like that, but I think everybody pretty much learned that trick. This is the new era with deepfakes now.

Ben Gold (41:36.324)

Yeah, it’s no longer the email from the Nigerian prince. That’s not what this is, speaking bad English. We’re in a new era right now of AI fakes. The other thing that I didn’t say is that there’s a company called HeyGen, and I got an account with them. It’s like 25 bucks a month. And for two and a half, I gave them two and a half minutes of me just sitting in front of a camera, speaking slowly and now I can make any video I want with my voice.

So I can now type in text, it takes the video and the voice that I created, and it will make a video that says anything I want. Think about how powerful that is: if you've got two minutes of somebody sitting in the same space, like right now, you could use that to train the model. They technically have certain kinds of guardrails, like they videotape you to make sure you're the person asking for the deepfake. But still, think of how powerful that technology is.

Vit Lyoshin (42:50.182)

Yeah, I mean anybody can record you anywhere. If you go to the store or the metro or whatever, while you're doing that they can be recording you and then creating all sorts of fakes. That's kind of dangerous. Your iPhone really becomes like a hacker tool.

Ben Gold (43:09.668)

It is and the amount of information on your iPhone is incredible.

Vit Lyoshin (43:14.598)

Yeah, well, I guess for good reason I've not picked up my phone for the last two years now. If somebody important calls me, they know they need to send me a message or leave me a voicemail. I get so many calls from random people I don't know asking for something that I gave up. I just never pick up my phone anymore.

Ben Gold (43:39.94)

You know what’s funny is that I’m finding random people are texting me all the time. And it’s like, and I’m at the point where I’m having a lot of fun with that, where they’ll say, hey, I’m coming over for dinner. What do you want to eat? And I’m like, and I would be like, and I just pretend like it’s somebody I know. I’m like, I’d like to have fish and chips. What about you? And then they’re like, you know, looking at me like I’m weird. But my point is that you get all these weird texts where people are trying to start conversations and then they’re going to scam you. So I just kind of, I don’t know, it’s just a weird thing of the world we live in today.

Vit Lyoshin (44:21.286)

Yeah, those people find creative ways all the time.

So what’s for the future of AI? You kind of mentioned a little bit about these next steps of technical evolution, I would say. But what in terms of use cases for AI? Are they going to help us build, like, I don’t know, flying cars or go to other planets or anything exciting like that that you heard about?

Ben Gold (44:48.196)

You know, I have not thought along those lines. But take, for example, where video generation is going, where we move from on-demand videos to on-demand custom videos. You say, I want the first Star Wars movie, A New Hope, and why don't you add me as a character, flying next to Luke Skywalker. It takes my picture and my voice print and creates this action scene; it could create an entire movie that now has me sitting next to young Mark Hamill. That's where I think we're going from an entertainment point of view, where people are going to demand that level of customization. Why would we watch an old movie when I could make something that is just for me?

I’ve talked to my son who is an electrician. He’s just finishing up his journeyman internship or apprenticeship. And I’ve said to him, well, what do you feel about robots taking over your job? And he’s like, well, first of all, he goes, I love it. There’s a half a million unfilled positions for electricians in the United States.

And so when you look at the fact that we're creating these generative AI tools that can perceive, that can see, that can understand objects around them, and we're putting them into humanoid robots, one of the next frontiers is going to be deploying humanoid robots in dangerous situations. And he was saying, I would rather call my manager and say that one of our bots got destroyed by an arc flash than call the wife or the husband of a person and say this person died from it.

So that is one of the more optimistic elements here, how AI will be incorporated into physical humanoid robots. We will probably have fast food jobs that get replaced by robots that can figure out how to make all this stuff, and that's going to slowly be infused into our society.

Now, again, this gets back to the point that nobody really loves those jobs; everybody starts at McDonald's, but it's not their dream job. And what happens when robots and AI are doing all the white-collar and blue-collar jobs? What are humans going to do? Are we going to have a situation where we can just hang out, be on vacation, and have robots serve us pina coladas on the Mexican seashore?

I mean, is that the future? Or is it something where you've got certain corporations with a higher concentration of power because they own this, so that between the employees still working and the companies still around that haven't been destroyed by AI, there might be this imbalance of resources? That's kind of the dystopian future. And it's really hard to tell; nobody has a crystal ball, but it could go either way.

Vit Lyoshin (48:19.622)

Yeah, that’s very interesting. A lot of science fiction movies come to mind starting from Terminator movies to Westworld and you can imagine any sort of future that is coming. Either people just stopped working and everybody has their own personal assistance to do everything or we still will work somewhere and we’ll still do some things and AI and robots and things like that will just help us and assist us. We’ll see. That’s very interesting.

Ben Gold (48:56.74)

It’s funny because I actually think Terminator and the other movie that comes to me is I, Robot. I don’t know if you ever saw I, Robot, but I think that one captures this idea of a human society dependent on machines doing all the jobs. And that was, I think, really, like when I watched that one again and I’m like, whoa, that’s not, you know, this far future. Ironically the movie takes place in the year 2030, and I think we won’t be too far away from that by 2030. So it is very weird how that’ll happen.

Vit Lyoshin (49:32.294)

Yeah, I mean once companies like Boston Dynamics integrate with AI, then we'll really see these robots become really smart and really capable of doing things. Until then, I think robots are just robots and AI is just AI, but when they are put together, that's where it becomes interesting.

All right. Well, it’s been a lot of interesting comments and a lot of interesting information from you. Thank you very much for your time. I learned a few things and I think this is going to be really interesting for people to hear. Thank you very much.

Ben Gold (50:16.964)

Yeah, thank you for the wonderful conversation.

Vit Lyoshin (50:19.942)

I hope we can talk more in the future. This is cool stuff. Thank you. Talk to you later.


