Tessa (00:00):

If good people don't use it, bad people will. And I think that stands super true. I mean, if you start educating students on it now, it could be super powerful and I think it's just opening up your mind to the possibilities that AI could hold.

Voice Over (00:14):

You've tapped into the Divine Spark Podcast from Paulsen. Join us as we explore a refreshingly human first approach to using AI in marketing, business development, and more in the ag, energy, and rural sectors.

Sara (00:31):

Hi everybody, I'm Sara Steever, CTO at Paulsen, and today with me is Tessa Erdman. Tessa is our newest employee at Paulsen. We thought that would be a great way to kick off this podcast. She's an account coordinator here and she actually interned with us over the summer. We've had a lot of fun talking about things and working together on our AI council. So welcome Tessa.

Tessa (00:52):

Thanks for having me.

Sara (00:53):

Yeah, it's great to have you here. So Tessa, you've had a little bit of time to be here and you've been exploring a number of processes and things that we do here and just getting used to the agency and frankly used to this idea of the divine spark and human first. So what does that mean to you? What do those two things mean to you?

Tessa (01:13):

I absolutely love the fact that we incorporate the Divine Spark and Human First into AI. Like you, I was super hesitant when AI was talked about, because as most people would say, it's going to take over our jobs. But in reality, like you mentioned in that first pilot podcast, if we learn how to use it correctly, you can use it as a tool, and good people should be using it, not the bad ones. It's that connection that you can make with people. As artists, that's what our whole work is about. And we can see with normal generative AI, there's not that connection. It's like a robot's talking to you. And so I think our different perspective, Human First, Divine Spark, really brings that connection point and makes it more personable, which is what all readers, listeners, anybody wants when they're looking at any of our work.

Sara (02:07):

Yeah, I love that. I love that connection piece that is important when you're trying to be authentic and really reaching your audience. We know if we don't connect with them, our work fails. Our work is absolutely never as good. So let's dive in and talk about the tool that you've been working on. I know you and I have had a chance to discuss it before, but why don't you let everybody know what you've been taking a look at?

Tessa (02:32):

Yeah, absolutely. So I've had the opportunity all throughout my summer to work with a variety of tools, but by far my favorite one is Perplexity. Simply, it is just super user-friendly, and I think it can answer any of the questions that I ask it, whether it's competitive research, helping me come up with a headline for an article, or maybe figuring out how to rewrite a word. It's just overall my favorite tool to use with pretty much anything and any of the questions that I have.

Sara (03:00):

I've also used Perplexity. I think it is a great tool. Can you talk a little bit about how Human First ties into how you might be using that tool?

Tessa (03:08):

I think that it ties in super well. I think Human First, it talks about making the experience user-friendly for those that use it. Like I said, I think Perplexity is a user-friendly site, but I also think that it does have a little bit of a personality to it, which is exactly what we talk about with Human First. As I mentioned, my definition of Human First AI, that Divine Spark, it's about making the connection with the reader, whoever is going to read your article or look at your graphic or listen to your competitive research. It's about making that connection, and that's why we are artists. And so I think that Perplexity kind of has that touch to it as well.

Sara (03:52):

So Tessa, you and I have both been using Perplexity, and I do find it, as you do, really, really useful. I think it's a great tool. The thing I like about Perplexity, of course, is that it's current; you can ask it about current events. So the other day I asked it, what are the most recent changes at Meta? And then I just backspaced that out and replaced it with Amazon, and it gave me the exact same answer, literally word for word, the exact same answer.

Voice Over (04:21):

Interesting.

Sara (04:22):

So I replied back, that is exactly the same answer you gave me for Meta, and then it apologized to me, like, I'm sorry. Yes, I did. Here's the real answer. Part of, I think, the Human First and Divine Spark is that your own domain expertise is pretty key to making sure things are working the way they're supposed to. Can you talk about that? How do you vet the results you get?

Tessa (04:45):

I think it's just talking to any of us. I mean, it went south in that situation. The cool thing about Perplexity is that it kind of tags on its sources, so you're able to fact-check it a little bit. Sometimes it's going to get it wrong, but that's kind of how we're working through AI. But the cool thing about it is that it gives sources, and it gives a variety of sources, so you can link right back to wherever they got that information.

Sara (05:09):

Yep, yep. I think that's true. So that does help a lot, unlike just using ChatGPT, where you don't necessarily get that, although it's starting to add sources, I think, in a way that it didn't use to, but

Tessa (05:22):

Yeah,

Sara (05:23):

Super helpful.

Tessa (05:24):

Definitely.

Sara (05:25):

Yep. So what do you think, are there any downfalls of a tool like Perplexity? Where do you find the shortcomings that you think it's got?

Tessa (05:36):

I think just, the more you use it, the more knowledgeable you get in how you can ask those questions. So we would word things differently than somebody that wasn't maybe specifically in advertising or marketing. And so it's just asking those right questions as we're trying to figure out how AI works. I think that's maybe the biggest downfall of it, because like you said, it answered the same question twice, and maybe people won't catch that it answered the same question twice. And so I think maybe that's just the biggest downfall.

Sara (06:08):

Yeah, one thing I've noticed in conversations with people, as I've had opportunities to go out and speak, is people come up and tell me stories where they know of a colleague or someone in their sphere of influence who did not double-check things and didn't check for the hallucinations and the things that LLMs can do, and it ended up biting them in the end. So it is really handy to have. Now, I haven't heard whether or not Perplexity ever makes up any of its sources. I know that used to be an issue, especially academically. I've heard kind of horror stories about the whole peer-reviewed thing, where AI is generating the report, it's also generating the sources that it came from, and without a human being intervening in there and just making sure those things are actually real, depending on the research, that could be really bad. It could be a bad thing. Those research pieces tend to get picked up and cited and cited and cited, and definitely need that human being intervening,

Tessa (07:09):

Cases,

Sara (07:10):

Especially

Tessa (07:11):

Rural America. I mean, there are so many misconceptions, and if we just keep using AI and it messes up, that's more misconceptions that can happen.

Sara (07:20):

That is a wonderful point, because we have been challenged in agriculture for a long time to make sure we're getting the true story told. And so if you think about all the stories that are not true that are out there, about anything from our agronomic practices to how we raise animals, all those things, then those could just really be perpetuated in a bad way, absolutely not a good way. Yeah, it's been hard enough to tell that story accurately. We don't need it in an LLM.

Tessa (07:49):

For sure. Absolutely.

Sara (07:51):

So what have you found, then, to be pluses on the Perplexity side compared to maybe other LLMs you've tried, using ChatGPT or Claude or

Tessa (08:00):

If you look specifically at ChatGPT, it's kind of basic. I think that Perplexity maybe takes it a step further. It's more accurate, in my opinion. But compared to other different AI tools, I think that it's just easier to run. And I think that simplicity behind it is going to make it a better tool for somebody that maybe doesn't understand it as much as we do.

Sara (08:26):

So Tessa, you are just wrapping up your college degree, and right towards the end here, ChatGPT and all the other LLMs hopped onto the scene. Could you talk about what it was like to be in school, and how did your professors respond to it? Were you encouraged? What was your experience with that?

Tessa (08:45):

With Ag Communications as my degree, I'm in a sector of ag classes specifically and then communications classes. And so in those ag classes they put a big red X over any AI, and so we don't really talk about it. But when we go into my communications classes, it's kind of brought up, but not really. They ask us our opinions, and usually the ag kids talk about how the misconceptions happen from using AI. Other than that, we don't really go past it. So coming into my senior year and still not really talking about it in classes was kind of crazy. And then coming to Paulsen and it being such a huge component. I mean, I've sat on the AI council, I've had the opportunity to do a project through AI that kind of blew my mind. Just being able to use it as a tool, and how we view it, I think is really, really cool. But it's interesting.

Sara (09:45):

So if you were going to go back and talk to your professors, what would you say to them after the experience you've had here?

Tessa (09:52):

I love the part where we talk about how if good people don't use it, bad people will. And I think that stands super true. I mean, if you start educating students on it now, it could be super powerful. And just trying to figure out how we can use that Divine Spark or the Human First AI. I think if we talk about it and have more conversations about it and just allow us to use it, have fun. I mean, my first day at Paulsen, I was assigned a project using AI to write an article: first, write an outline, and then use it to write an article. And I was like, in shock, why would we want to use AI to write it? But essentially it was to make me learn that AI is a tool and it's okay to use. And I think that's something a lot of students don't understand.

Sara (10:41):

Yeah, it makes me wonder, for our universities, how long it'll take them to catch on to that. And maybe you need to go back and talk to them, Tessa.

Tessa (10:49):

I am indeed.

Sara (10:50):

Definitely. So can you talk about how we find our checks and balances here? I think that might be an interesting thing for our audience to know. We don't just use it and copy paste it out and publish it, but what's the process that we go through to make sure it's okay?

Tessa (11:05):

Yeah, so I'll kind of talk about the project that I did and how I went through it. So first we were in a webinar, and we were trying to give kind of an overview of what the webinar was for our clients. And so we used a transcript, and we were able to put it into, I believe it was Perplexity or ChatGPT for that project. But I put it in there and it was able to spit out an outline, and I was able to edit things, kind of delete the things that I didn't think were very important or keep the things that I thought were important in there. And then I was able to ask it to help me write an article using both the transcript and the outline. And then from there it's just rereading the article, deleting the stuff that maybe doesn't seem like something I would've said, just kind of putting it in my words and figuring out how to use what was given. But yeah, putting it in my words.

Sara (12:00):

So you're just coming off of school, so you've been doing a lot of learning over the last four-ish years. Can you talk about what you think would be a good way to approach all of the learning that comes along with using a new set of tools like generative AI is bringing us? What would your approach be?

Tessa (12:19):

Letting us use it. Start by just doing a simple project, like how I wrote that article, or having us do a basic search, and just giving us the opportunity to try it, talking about what we like about the tool, what we don't like about the tool, and being able to try a different tool. I think just giving us the opportunity to give it a go with zero parameters, just to try it out, would go a long way. I know it did for me.

Sara (12:52):

Yeah, I have heard that when I've been out and talking about it as well. Some of the people who are most hesitant are the people who haven't ever used the tool. So I think once you do use one or two, or the more of them that you use, the more you realize, well, they're not necessarily that good unless they've got good input from you. If you don't know how to write a good prompt, you're probably not going to get good stuff out of it. And I think it also is just the unknown, the fear of the unknown, when you don't necessarily understand what it's capable of and what it's frankly not capable of. Yeah,

Tessa (13:26):

I would agree. And just walking in with that human first mindset is huge because I think that could change all the students' viewpoints on it.

Sara (13:37):

So now that you're at Paulsen and you've been working on a lot of different projects, and you've touched several different departments at the agency, one of the things you've been helping with is research. Can you talk a little bit about how you think it might be beneficial for research?

Tessa (13:50):

Yeah, absolutely. Another project that I had this summer was finding competitors for a company, a company that I didn't know very well. It was a sector of agriculture that I wasn't super familiar with. And so I was tasked to find the competitors and do an analysis of them. My question was, how do I even know who the competitors are? And that's not just a Google search that you could do. Perplexity was able to point out the top five competitors for that company. And I didn't keep going off of Perplexity for the rest of it, because I went through their websites, I went through their social media, but it gave me a starting point. And I think that's something valuable too, to know you don't have to use AI for the entire project; use it as a starting point, as a base. And I think that can also be super beneficial.

Sara (14:44):

Yeah, good point. And actually, I think Perplexity would be a good double check for tools like SEMrush that we use. SEMrush will go and find, here are the five competitors for this particular website that you're trying to analyze, and they're not always accurate. Oftentimes the algorithm will pull in something that's frankly completely unrelated. So a good, fast backup to that can be really helpful.

Tessa (15:08):

Absolutely. Yeah.

Sara (15:10):

So what do you think is an approach that you would use to try to find and explore some new tools? Do you have any tips or suggestions for our audience?

Tessa (15:21):

Just sitting in the AI council at Paulsen, a lot of how you find tools is by reading articles, watching the news, honestly, because it's coming up so quickly, but also podcasts, just staying up to date with people that are involved in the AI world. It's not hard to look up AI tools, and they all pop up, so it's clicking on one link and seeing what's there.

Sara (15:47):

Yeah, I think it's helpful to have some places that you trust that you think are paying attention because you can spend all day every day trying to stay on top of this, and it's just about impossible

(15:58):

To try to figure out what all the new tools are. And one of the things that we talk about here at Paulsen is not marrying any one technology, because you can end up making a huge investment and then it's leapfrogged the next day when you wake up. So you want to be able to be portable, I guess, in how you think about the tools that you're going to use. And they seem to have strengths. One of the things that I like to do is use Ideogram for images, because Ideogram is not very aligned, meaning it might show you something, not highly inappropriate, but just things you can't get out of other LLMs. But it'll give you what's called a magic prompt, which is very long and flowery and leaning into the language for sure when it generates the prompt, much longer than I would write for an image-type prompt. And then I take that and pop it into Adobe Firefly and just see what the outcome is there. Can I get a better result out of Firefly? And I like Firefly because it's just more integrated into Photoshop,

(17:03):

So that makes it a little bit easier for me to use. But it's just kind of a workaround for a couple things. One is it does a pretty good job at generating images. It may not quite be as good as Firefly, it kind of depends, but I like that I get that prompt, because this idea of trying to learn to be a better prompter, I think, is something that's a reality for all of us coming up.

Tessa (17:26):

Definitely.

Sara (17:27):

Yeah. And that gives you a good leg up on how to write a good structured prompt for an image.

Tessa (17:34):

Yeah, absolutely.

Sara (17:35):

Very good. Any other final thoughts, Tessa, for us on your journey, on your personal journey for AI?

Tessa (17:44):

I think for me, I walked in with an excitement for AI but also strong hesitation, and I think opening up your mind to the possibilities that AI could hold is something I hope everyone can do.

Sara (18:00):

Tessa, thanks for being on the show today and for sharing your journey and your learnings about Perplexity. I think it's a great tool as well. I think we're in agreement on that.

Tessa (18:09):

Absolutely

Sara (18:10):

Appreciate you being with us.

Tessa (18:11):

Yeah, thank you for taking me on this journey.

Voice Over (18:15):

Thanks for listening to The Divine Spark. Visit us at Paulsen.Agency with any questions or ideas for future episodes.