In our 14th episode of Launch Codes, Joe is joined once again by Matt Tonkin, RP’s VP of Consulting & Partnerships. They discuss:


Listen In


Google introduces Gemini to mixed reviews

Last week, Google introduced its new AI model, Gemini. It’s been a hot topic in the tech community, but not all of the attention has been positive. The controversy comes from Google’s demo video which demonstrates Gemini in action.

In the video, Gemini was responding to real-time inputs, like images and text, suggesting a certain level of intuitive interaction. But some were skeptical. Harri Weber from TechCrunch said, “In reality, it was a series of carefully tuned text prompts with still images, clearly selected and shortened to misrepresent what the interaction is actually like.”

Joe also brought our attention to a tweet from Ethan Mollick which reads:

“We really don’t know anything about Gemini Ultra. Does it beat GPT-4 for real? If so, why by such a small amount? Two options: 1) Gemini represents the best effort by Google, and the failure to crush GPT-4 shows limits of LLMs approaching 2) Google’s goal was just to beat GPT-4”

Matt commented that, once the demo video was broken down and revealed to be edited, it didn’t feel like a genuine demonstration anymore. Overall, he was disappointed and is wondering how this was approved by the Google team in the first place.

Joe responds by recognizing that he can understand the need to quickly demonstrate a product’s capabilities, especially in a short video. But at the same time, he makes the point that a very small amount of transparency — such as a disclaimer line somewhere in the video about how it was edited for brevity — could’ve gone a long way in building trust.

Joe and Matt also discuss how OpenAI, a company with around 700 employees, is able to move more quickly than Google, a company with tens of thousands of employees. “It’s bureaucracy and moving and shifting in the behemoth is pretty hard”, says Joe.


HubSpot’s November release updates

HubSpot released its November notes that included over 40 updates across their hubs, including Marketing, CMS, Service, Sales, and more. Joe and Matt touched on some of the main changes, which we’ve outlined below.

1) Daily Record Enrollment Limit For Workflows in Sandboxes

Matt commented on this one, stating that this is HubSpot’s way of limiting the backend processing it has to do. At the same time, if you’re running over 100,000 people through workflows in your sandbox, chances are your testing processes should be reviewed. Overall, this change isn’t going to affect most people unless they have massive data movements they’re trying to test against.
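For teams that genuinely need to test large data movements against the new cap, the workaround is simply to split the test set across days. A minimal sketch of that idea (the 100,000 figure matches the limit described above; the batching helper is our own illustration, not a HubSpot API):

```python
from itertools import islice

DAILY_SANDBOX_LIMIT = 100_000  # HubSpot's new per-sandbox, per-day enrollment cap

def plan_enrollment_batches(record_ids, daily_limit=DAILY_SANDBOX_LIMIT):
    """Split a large test dataset into day-sized batches that respect the cap."""
    it = iter(record_ids)
    batches = []
    while True:
        batch = list(islice(it, daily_limit))
        if not batch:
            break
        batches.append(batch)
    return batches

# 250,000 test records would need three sandbox days:
days = plan_enrollment_batches(range(250_000))
print(len(days))                     # 3
print(len(days[0]), len(days[-1]))   # 100000 50000
```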

2) Anomaly Monitoring on Property Updates

Matt loves this one. When something breaks in MOPs, it’s like a small pipe leak in a house: you don’t notice until there’s a massive water spot that someone points out to you, usually in the form of a salesperson asking you about errors. This is a welcome feature that will allow more active monitoring to catch errors before they get out of control.
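The “pipe leak” problem Matt describes is essentially outlier detection on daily update counts. As an illustration of the concept only (HubSpot hasn’t published how its detector works), a simple z-score check over recent history is enough to catch the kind of spike a salesperson would otherwise find first:

```python
import statistics

def is_anomalous(todays_updates, history, threshold=3.0):
    """Flag today's property-update count if it sits more than
    `threshold` standard deviations away from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return todays_updates != mean
    return abs(todays_updates - mean) / stdev > threshold

# A property that normally changes ~200 times a day suddenly changes 5,000 times:
history = [180, 210, 195, 205, 190, 200, 220]
print(is_anomalous(5000, history))  # True
print(is_anomalous(205, history))   # False
```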

3) Webhook Triggers in Workflows

This is an interesting one as well because it moves HubSpot closer to some of the more customizable MOPs platforms like Marketo. Now we can directly trigger a process from a custom app or company login portal, for example. Doing this before required a convoluted process; this definitely simplifies things.
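As a rough sketch of what this unlocks: a custom portal can now POST an event straight to a workflow’s webhook trigger URL. The endpoint URL and payload field names below are hypothetical placeholders, not HubSpot’s documented schema; check the release notes for the real contract.

```python
import json
from urllib import request

# Hypothetical endpoint: HubSpot generates a unique trigger URL per workflow.
WEBHOOK_URL = "https://example.hubapi.com/webhook-triggers/EXAMPLE"

def build_trigger_payload(email, event, properties):
    """Shape a third-party event as JSON for a webhook-triggered workflow.
    Field names here are illustrative, not HubSpot's documented schema."""
    return json.dumps({
        "email": email,
        "event": event,
        "properties": properties,
    }).encode("utf-8")

def trigger_workflow(payload):
    """POST the event; the workflow enrolls the matching contact."""
    req = request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)

# Example: a user logging into a company portal kicks off a workflow.
payload = build_trigger_payload(
    "user@example.com", "portal_login", {"plan": "enterprise"}
)
print(json.loads(payload)["event"])  # portal_login
```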

Read more about HubSpot’s November updates.


Make a ‘marketable’ difference with executable campaigns

This week’s question from the marketingops.com Slack community (used with permission from its founder, Mike Rizzo) is: “Is anyone using executable campaigns? We have not used them, but we have some use cases where they can be of value, like lead lifecycle management and lead scoring. What is a best practice setup?”

For anyone who doesn’t know, “executable campaigns” are an update to Marketo’s “request a campaign” process that lets a campaign workflow be run from another campaign (or via the API, for example).

Matt emphasizes that executable campaigns are great because they execute in series, not in parallel. In other words, they ensure that one process finishes before the next one starts, preventing different processes from running at the same time, which can cause issues. “If you need to have something happen before something else, executable campaigns are your friend”, says Matt.
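The series-versus-parallel distinction is easiest to see in code. This is a conceptual sketch of the execution model, not Marketo code: each step blocks until it finishes, so downstream steps always see the upstream result.

```python
def run_in_series(steps, lead):
    """Executable-campaign style: each step must finish
    before the next one begins."""
    for step in steps:
        lead = step(lead)  # blocks until this step completes
    return lead

# Order matters: score the lead before routing, route before notifying.
def score(lead):
    lead["score"] = lead.get("score", 0) + 10
    return lead

def route(lead):
    lead["owner"] = "AE" if lead["score"] >= 10 else "SDR"
    return lead

def notify(lead):
    lead["notified_team"] = lead["owner"]
    return lead

lead = run_in_series([score, route, notify], {"email": "user@example.com"})
print(lead["owner"], lead["notified_team"])  # AE AE
```

With “request campaign” semantics, by contrast, each step fires and returns immediately, so routing could run before scoring had finished writing the score. That race is exactly what forces people to sprinkle wait steps everywhere, and it is what executable campaigns eliminate.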


AI transparency for marketers

In this week’s segment of AI Navigators, we want to draw your attention to a recent article from HubSpot called, “The Complete Guide to AI Transparency [6 Best Practices].”

We’ve been talking about AI principles and guidelines on the show over the last few weeks and even built a MOPs Advisor custom GPT to help organizations create guidelines, so we want to give a shoutout to HubSpot for bringing this important conversation to the forefront.

A quote we loved from the article was, “Transparency in AI isn‘t just about technology — it’s about aligning AI goals with organizational values, ensuring stakeholder interests are met, and building a culture of openness and accountability.”

The 6 best practices they outline for creating an AI Transparency model are:

Step 1: Define and align your AI goals.
Step 2: Choose the right methods for transparency.
Step 3: Prioritize transparency throughout the AI lifecycle.
Step 4: Continuous monitoring and adaptation.
Step 5: Engage a spectrum of perspectives.
Step 6: Foster a transparent organizational culture.

One thing that immediately stood out to Matt was that “Foster a transparent organizational culture” is at the end. Matt commented that he thinks this should definitely be closer to the beginning, as one of the first things you consider as an organization — probably right after defining and aligning your AI goals.

Joe responds by suggesting that maybe this is a step that spans the entire process, at every stage. He also notes that a “transparent organization” doesn’t just happen overnight. It takes time and effort to build a transparent culture, so making it “Step 6” in a list like this feels a bit off.

Another point of discussion between Joe and Matt was the importance of connecting AI policies to your organization’s key values. There has to be alignment between how you’re using AI and what your company’s values and mission are. Risk is connected here as well.

“I strongly believe that there are organizations with so much at stake that they couldn’t risk allowing people just to have free rein and do what they want [with AI]. So I think there’s some connectivity there to what your values are as an organization and what the risk-benefit reward is”, says Joe.


Hot Takes

  • IBM, Meta, and 50 other organizations launch alliance to challenge dominant AI players.
    • Major corporations including IBM, Intel, Sony Group, Dell, Meta, and others are forming an “AI Alliance” along with top universities like Yale, Cornell, and Dartmouth and AI startups like Stability AI.
    • The alliance is seen as a move to challenge the dominance of OpenAI, Microsoft, Google, and Amazon in the AI space.
  • Over 2,000 new martech tools were introduced in the last 6 months
    • Report by Scott Brinker (VP of Platform Ecosystem at HubSpot and editor at ChiefMartec) and Frans Riemersma (MartechTribe).
    • “The truth is you can build an empire with all the genAI that has been surfacing — and by an empire, I mean, of course, a business.” (Frans Riemersma, MartechTribe)
    • Generative AI is responsible for at least 73% of the increase.




Vinyl:

This week, Joe is delighted to return to one of his favorite bands, The Smile, to showcase their latest single “Wall of Eyes”. It connects perfectly to Google’s Gemini demo video and the idea of a “wall of eyes” watching us at all times. It’s a beautiful blue vinyl that will be arriving in January.

Craft Beer:

Matt brought in an extremely colorful can of beer called “Psychedelic Puzzle Factory” from Flying Monkeys — a craft brewery based in Barrie, Ontario. Flying Monkeys has always been known for its extravagant can designs, which it has kept up even as many other breweries fall back to traditional labels. Matt relates this concept of “standing out” back to this week’s story on 2,000+ new martech tools in 2023. It makes you think: how can those tools stand out too?


Read the transcript

Disclaimer: This transcript was created by AI using Descript and has not been edited.

[00:00:00] Joe Peters: Welcome to episode 14. On today’s show, Forget Sagittarius, this December belongs to Gemini. Webhook, line, and syncer in HubSpot’s November update. A community question, make a marketable difference with executable campaigns.

[00:00:20] Joe Peters: Our AI navigator segment, seeing AI to I on transparency guidelines. And, hot takes, the AI alliance aims to reboot the industry’s power dynamics. And Martech’s tool-rific surge, over 2,000 tools introduced in the last six months. I’m your host Joe Peters, and today I’m joined by Matt Tonkin. Matt, what are you excited about today?

[00:00:50] Matt Tonkin: More puns, always more puns. But no, I’m excited to talk about Gemini and see, see where Google’s going with its challenge to OpenAI.

[00:01:01] Joe Peters: Yeah, there’s a lot there. And let me give you a bit of a recap in terms of Google introducing Gemini and having some mixed reviews, and we’ll talk a little bit about that, but I’m going to give a little bit of context and background here.

[00:01:17] Joe Peters: Last week, Google introduced its new AI model, Gemini, and it’s been a hot topic in the tech community, but not all the buzz has been positive. So Gemini comes in three versions, each tailored for specific use cases. The Ultra version has shown exceptional performance, surpassing results on 30 of the 32 widely used benchmarks.

[00:01:43] Joe Peters: But there’s a little bit of controversiality here. Google’s demo video, which was amazing and very, very impressive, it’s titled Hands-on with Gemini, had a little bit of skepticism. There was a bit of a shell game being played there in terms of how the AI was responding in real time to inputs. And so we’re going to have to see what the reality of this is.

[00:02:10] Joe Peters: One Harri Weber from TechCrunch said, in reality, it was a series of carefully tuned text prompts with still images, clearly selected and shortened to misrepresent what the interaction is really like. And then, we got a tweet from Ethan Mollick who, uh, is at Wharton and is, yeah, I have a lot of time for him.

[00:02:36] Joe Peters: He, he, his takes are pretty interesting. And so just one thing from him and then Matt, you and I can dive in. We really don’t know anything about Gemini. Does it beat GPT-4 for real? If so, why by such a small amount? Two options. Gemini represents the best effort by Google and the failure to crush GPT-4 shows limits of LLMs approaching.

[00:03:04] Joe Peters: Google’s goal, so the second being Google’s goal was just to beat GPT-4. Three, whatever the secret sauce that OpenAI put into GPT-4 is so secret and so good that other labs cannot figure it out and can’t catch up. Or four, Gemini represents Google’s best effort, but their ability to train a good model is limited.

[00:03:29] Joe Peters: Anyway, there, there’s a lot here, but what are you, what’s your take on this so far, Matt?

[00:03:36] Matt Tonkin: Yeah, it was interesting because I watched the sort of demo hands on demo video. And while you’re watching that, you know, it’s very impressive. It seems like it’s doing all this analyzing in real time with.

[00:03:47] Matt Tonkin: Essentially zero additional prompts being generated on part of the user. And yeah, when it’s, when it’s broken down, it’s to me, it’s not a demo video anymore. It’s really here’s our future goal at some point in time, which is really disappointing because you have to be open about that. You can’t just like make it seem like it’s going to do a whole lot more than it wants.

[00:04:12] Matt Tonkin: Yeah, so I was, I was a little disappointed in that and it’s kind of a weird. Weird choice from Google. I don’t know how that got like rubber stamped to go out in that format. I guess it’s a lot more impressive looking than you know, having to iterate 12 prompts, even though that’s the reality of where we’re at.

[00:04:31] Joe Peters: Obviously, there is a bit of expediency in demonstrating how it works. So it can be important to show a flow and not have all the lags, especially in a short video. I get that. There could have been a little line in the video: this was edited for brevity. You know, sometimes a little bit more transparency there could have gone a long way.

[00:05:03] Joe Peters: And ultimately, it’s not that big of a deal. It’s just condensed what our experience is going to be. And I don’t think that takes away from the impressive AI vision elements.

[00:05:15] Matt Tonkin: No, definitely not. And, you know, I still think it’s definitely impressive. And the idea that it barely beats GPT-4, maybe that’s true, but I don’t know if every single implementation of new AI has to be this huge leap forward.

[00:05:33] Matt Tonkin: I think technology is much more incremental than people give it credit for. So, so I think, you know, it’s okay if we start seeing companies laddering against each other rather than a giant leap forward, every single new piece.

[00:05:47] Joe Peters: I think you’re touching on something which is really important. And that’s the idea of, is this.

[00:05:56] Joe Peters: Is this incremental growth or is this exponential growth? And if you’re talking to the AI enthusiasts, they’re projecting this exponential growth curve, which then is counter to incremental. And I think that’s where we’re, we’re trying to understand our brains really are used. We’re used to incremental growth.

[00:06:23] Joe Peters: We got an iPhone and then we didn’t instantaneously get the iPhone 15 after the first gen came out. It’s not. We had some increments along the way. We’re used to moderate improvements in what we’re doing. Some, some of them are earth shattering. When you change from your Motorola flip phone to an iPhone, it was a pretty big jump, right?

[00:06:46] Joe Peters: But in this instance, in thinking about Where things are at and what, how Gemini was, has been delayed so many times now. I really wonder where what’s happening at Google, like a company of 700 people is outperforming a company with unlimited resources, some of the best minds in this space. And I find that really surprising.

[00:07:23] Matt Tonkin: Yeah, and you get, there’s always the difference between sort of that startup mentality, like, get it done, get it done, and, you know, somewhere with Google where a lot of their biggest moves in the last decade are acquisitions more so than, you know, in house. Not to say that Google’s not doing anything in house, but right, like, there’s a certain scale at that point.

[00:07:49] Matt Tonkin: I think can play into this and just the, um, and maybe it’s the wrong term, but bureaucracy of a larger company, not that open AI wouldn’t have that at its scale as well.

[00:07:59] Joe Peters: No, it’s 100 percent the right term. It’s bureaucracy and moving and shifting in the behemoth is, is pretty hard. And DeepMind was an acquisition that, that didn’t just come out of.

[00:08:17] Joe Peters: Of Google. So we have a lot to, to figure out here. There’s a lot more to the story. The AI wars are going to continue and it’s going to be entertaining for us and I think going to be exciting as a time to be in technology and what that means in terms of uh, what we can do and, and the sort of limitless potential that we can see in our future.

[00:08:44] Joe Peters: But let’s switch gears here a little bit, Matt, and move over to the HubSpot November release updates. And there’s a few things here. So, HubSpot released its November notes, and that included over 40 updates across their hubs, including marketing, CMS, service sales, and more. Here are a few of the highlights.

[00:09:06] Joe Peters: Daily record enrollment limit for workflows in sandboxes. So they implemented a daily record enrollment limit for workflows in sandboxes. Got you. Double bullet there. Sandbox users can enroll up to a hundred thousand records per sandbox account per day. And before this change, sandbox users could enroll an unlimited number of records per sandbox a day, which is kind of strange, but we’ll get to that.

[00:09:34] Joe Peters: Yeah. Let me cover the other two as well. Anomaly monitoring on property updates and that is in regards to the update volume across the CRM using HubSpot AI. This includes a new section on data quality command center that monitors properties and takes actions and users will also have the ability to subscribe themselves and other users to notifications triggered by anomaly issues.

[00:10:03] Joe Peters: And then the last area is on webhook triggers and workflows. And so the webhook triggers provide you with the flexibility to pull data in from your third party systems apps in order to trigger workflows directly in HubSpot on the third party data. Being able to trigger from a webhook will solve a variety of automation use cases for you to automate from your third party data.

[00:10:29] Joe Peters: Now, Matt, decode some of this for us.

[00:10:31] Matt Tonkin: Yeah, I mean, first and foremost, I think the daily record enrollment limit for sandboxes. That’s, that’s just HubSpot trying to limit the amount of backend processing they have to do. But also, if you were running over 100,000 people through workflows on your sandbox, maybe your testing processes should be reviewed.

[00:10:52] Matt Tonkin: I get maybe some situations where you need to do large scale testing to see if there’s any fault points based on throttling or anything like that. But I doubt this is going to affect most people unless you have, you know, massive data movements that you’re trying to test against. I love the anomaly monitoring.

[00:11:12] Matt Tonkin: And the reason why is as a mops person, usually when something’s broken, it’s kind of like a little, A little leak in a pipe in a house and you don’t notice it until there’s a big water spot and someone points it out to you. Usually that’s in the form of a, you know, a salesperson or someone saying, Hey, what’s with all of this?

[00:11:33] Matt Tonkin: And by the time you get to it, you know, you’ve got to dig back a little bit. So I like the idea of something active monitoring and, you know, seeing weird things happening in data before, you know, it gets out of control and you can remediate that a lot faster. So I do like that. The webhook triggers is an interesting one.

[00:11:51] Matt Tonkin: I think it, it moves HubSpot a bit closer to some of the more customizable and integratable MOPs platforms like Marketo, for instance, where, you know, by doing this, we can directly trigger a process from say a, a custom web app or a, you know, an application that a company has its own, its own portal, its own login.

[00:12:11] Matt Tonkin: Whereas before you would need to kind of do some convoluted processes to, you know, update data and then off of that data trigger things. This can be just a direct process. A direct triggering mechanism. So it definitely opens up a lot more, I think, for companies with their own custom developers.

[00:12:27] Matt Tonkin: It allows you to do a lot more within your own your own platform. So I think this is a win for HubSpot.

[00:12:35] Joe Peters: Yeah. That’s the one thing that we keep on seeing from HubSpot is they’re not taking their foot off the gas at all in terms of innovations and new feature sets that they keep on adding to the platform.

[00:12:47] Matt Tonkin: I will say with HubSpot, and I maybe mentioned this after we went to Inbound, that sometimes they like to announce, make announcements that are a little underwhelming, but in volume. So I think that’s true with the 40. I think we, we found some three, three pretty good ones here. There were a few other interesting ones, but I think sometimes the announcements can be more for volume than quality.

[00:13:08] Joe Peters: That’s fine. That’s fine. We’re used to all these iOS updates telling us about, that we’re not going to use anyway. But there are edge cases and fringe cases that if this is important to you, you’re going to want to have a deeper dive into it. And so maybe what we could do is in, in the show notes, we can put a reference to that release so you can dive deeper if you really need to.

[00:13:35] Joe Peters: All right, let’s move into our community question and thanks to the marketingops. com community for today’s question. And what we have here is, is anyone using executable campaigns? We have not used them, but we have some use cases where they can be of value, like lead lifecycle management and lead scoring.

[00:13:59] Joe Peters: What is the best practice setup?

[00:14:02] Matt Tonkin: Yeah, and first off, this is a very Marketo centric question, but for anyone that doesn’t know executable campaigns were essentially, An update to Marketo’s request a campaign process where you can from another campaign or via API or whatever the case Request a campaign workflow to run.

[00:14:22] Matt Tonkin: The great thing about executable campaigns and why it’s a step up on that is They execute in series, or in, yeah, in series, not in parallel. And for anyone that doesn’t understand that picture, you know, running through a line item on a list and If you’re calling a campaign, whether it’s executable or requesting that starts to run.

[00:14:45] Matt Tonkin: So with an executable campaign, it needs to finish before the next step in the process starts. For a requestable campaign it just starts running and then whatever the next step in the process continues. What happens here is you have all these different processes happening at the same time and you kind of have to start throwing in wait steps and all these issues.

[00:15:06] Matt Tonkin: Executable campaigns solves that. So that’s where, you know, if you need to have something happen before something else, executable campaigns are your friend. The other big benefit is that they pull in token and data from the parent campaign. So whatever called it that program, you can use your token values from it and run it in the executable campaign, wherever it’s located, allows you to decentral or to centralize a lot of processes that way.

[00:15:34] Matt Tonkin: Don’t get carried away with these types of things just because you have the option. There’s always find the best solution. But yes, executable campaigns, that’s for, you know, processes you need to happen in a certain order and consistently. So that’s sort of your best practice setup.

[00:15:52] Joe Peters: All right.

[00:15:52] Joe Peters: Well, that’s, that’s a great question. And thanks, Matt, for clarifying that today. Alright, our next section and our new segment for those of you that maybe missed the last couple of segments is AI Navigators. And each week we’re going to take a little look at something that’s happening in terms of practical use and impact for our colleagues in MOPS as it pertains to AI and the influence and impact that it can have.

[00:16:25] Joe Peters: So this week we have six best practices for AI transparency in marketing. And this comes from HubSpot. So we want to give a little bit of a call out to them. We’ve been talking about AI principles and guidelines on the show the last couple of weeks. And last week HubSpot, probably because they were listening to Launch Codes, released a series of best practices for AI transparency themselves.

[00:16:51] Joe Peters: And there’s a couple of points here. AI transparency is the practice and principle of making AI systems understandable and interpretable to humans. And transparency in AI isn’t just about technology, it’s aligning AI goals with organizational values, ensuring stakeholder interests are met, and building a culture of openness and accountability.

[00:17:16] Joe Peters: And that’s a direct quote from the HubSpot article. And in the article, they outlined six best practices. I’m just gonna quickly run through the, the titles here of their six best practices, and then you and I can dive in, Matt. But first, they say define and align AI goals. Second, choose the right methods for transparency.

[00:17:38] Joe Peters: Third, prioritize transparency in AI lifecycle. Four, continuous monitoring and adaptation. Five, engage a spectrum of perspectives.

[00:17:54] Joe Peters: So there’s a lot here, and we love this pile-on; if there are more people talking about this, we recognize that it’s an essential time for organizations to start to think through things. So what are your first impressions of this, Matt?

[00:18:13] Matt Tonkin: Yeah, and this might be a little nitpicky, but I think one of the things that stands out to me is I would have, you know, number six there, foster a transparent organizational culture around that.

[00:18:24] Matt Tonkin: I’d have that much higher in the list probably right after define your goals. I think that, you know, setting the tone for an organization and how everyone’s using it. Not, I don’t want to say secretive, but being conscious of being open with what you’re doing and how you’re doing it. Both that’s great
for like just learning within the organization.

[00:18:43] Matt Tonkin: Avoiding those conflict of interests. I think I think that needs to be higher up on this list for me

[00:18:49] Joe Peters: Or maybe it’s something that kind of spans all of them, and the thing is, transparent organizational culture isn’t something that happens overnight. You either kind of espouse that value or you don’t, and making it number six,

[00:19:10] Joe Peters: I agree, is kind of wonky. Maybe it could be one of those things that help lead to this, but being transparent culturally in terms of what’s happening with AI is really important today.

[00:19:25] Matt Tonkin: Yeah, definitely. What’s your take on defining and aligning AI goals? Because I feel like you have to be so broad when you’re aligning your goals.

[00:19:34] Matt Tonkin: And then there’s such for individual use cases, right? You almost need tiers of

[00:19:40] Joe Peters: aligning your goals. I think what becomes a no brainer, if there is a use case that aligns with what’s happening today, then, and what your mission is, well then that becomes a no brainer. Okay? But, I think when we start to look at the macro AI connecting to mission, And values.

[00:20:11] Joe Peters: I think the key piece there is the values and understanding the alignment between your organizational values. And, you know, we talked about this I think last week, where there are kind of, I, I, I think there are three types of ways that organizations can look at AI use. One, it can be kind of open, do whatever you’d like.

[00:20:39] Joe Peters: Try and see what you can do. Let’s find ways to work better, more efficient, do cool stuff. So that’s one end of the spectrum. The other end of the spectrum is I want to approve every single use case before it’s executed or even attempted. And then there’s, there’s a huge span in between. And so your value alignment is going to be is going to be tied to your what your tolerance for risk is, and maybe that’s where we get into the challenges with Google with 100, 000 people.

[00:21:15] Joe Peters: Their tolerance for risk is maybe not the same as 700 people. But when we, but when we look at this, I think, you know, I actually really strongly believe that there are organizations where there’s so much at stake that they couldn’t risk allowing people just to have free rein and do what they want.

[00:21:40] Joe Peters: So I think there’s, there’s some connectivity there to what your, what your values are as an organization and what the risk benefit reward is. I think that’s another thing that we’re going to talk about in, in, in future weeks. But this idea of if You know, we’re going to have risks declining over time, and we’re going to have benefits increasing over time.

[00:22:05] Joe Peters: And if, when those benefits transverse that risk, that risk curve, and we, we have that tipping point where we move into the benefits greatly, starting to greatly exceed the risk, that’s going to be a whole new era for organizations because the use cases are going to be so impactful that you can’t ignore it.

[00:22:25] Joe Peters: But we’re just still curving up on that tipping point. What we’re seeing as the benefits and impacts that AI can have, helping you write an email isn’t enough. Like, that’s not it. And I’m, and I’m only being a little bit facetious in that. It can do so much more than that, but that’s not gonna make you transform your organization overnight.

[00:22:51] Joe Peters: It’s got to be some greater impact, higher use cases, right?

[00:22:56] Matt Tonkin: If you were, if you’re in that high risk category, you don’t need to maybe be the front runner, be who’s doing all this new, amazing stuff with AI. You can sit back and see how other companies approach this you know, play it a bit safe.

[00:23:13] Matt Tonkin: And with less risk, yeah. Have some fun and you know, see what you can do

[00:23:19] Joe Peters: I think another smart way is you create a mini lab. If you’re a large organization, you have the resources to assign a few people to an AI lab where they can be presented with different use cases to kind of experiment and try around, but they actually understand the parameters around it and walls that they need to set up.

[00:23:41] Joe Peters: So nothing, nothing crazy or bad takes place. So anyway, I, I, I appreciate this pile-on that we’re seeing, that these conversations are really important. The, the changes and transformations are coming and you need to start to think through this. And ultimately the more transparent you are with it, the greater I think the return is going to be because you’re gonna have a greater engagement with your, with your teams and employees on what this can do.

[00:24:19] Joe Peters: All right. Well, let’s move into our sponsor thank-you. So our thanks this week to our friends at Knak for sponsoring today’s episode. Knak is the no-code platform that allows you to build campaigns in minutes. Get more engagement in less time with their simple but sophisticated email builder.

[00:24:39] Joe Peters: Visit knak.com to learn more. That’s K-N-A-K dot com. All right, now let’s move into our hot takes segment. And I like this next one in the theme of piling on. But we have IBM, Meta, and 50 other organizations launch an alliance to challenge dominant AI players. And this includes on the list Intel, Sony Group, Dell, Meta.

[00:25:10] Joe Peters: As well as some universities like Yale, Cornell, Dartmouth, and then Stability AI is in there as well. And so this alliance is seen as a move to challenge the dominance of OpenAI, Microsoft, Google and Amazon in the AI space. And the group is working around a number of broad categories, including creation of common frameworks for evaluating the strength of AI algorithms, devotion of capital to AI research funds.

[00:25:38] Joe Peters: and collaboration on open source models. Matt, what’s your thoughts here? Yeah,

[00:25:45] Matt Tonkin: it’s a, for me, it kind of resonates with a sort of a net neutrality feel, right? The, the idea of we can’t just let one or two huge companies control everything and how we do this. But at the same time, you know, those names on this organization list aren’t mom and pop shops.

[00:26:06] Matt Tonkin: They’re, they’re huge

[00:26:07] Joe Peters: organizations. What surprises me here, maybe with the exception of Meta, is that the others have not made huge strides in AI, at least that I’m aware of. Maybe they have some secret things that are happening. You know, Intel’s on this list, but Nvidia’s not on this list.

[00:26:28] Joe Peters: Mm-hmm. IBM, Sony, and Dell. You know, it’s interesting, I’m in Austin this week. I get off the airplane, and walking through the airport to exit, there are all these Dell AI ads. Oh. So I’d never seen that before. Yeah. Maybe they’re doing something, I’m not sure, but it was interesting, just not having been exposed to that before.

[00:26:55] Joe Peters: I haven’t heard a lot of these organizations making big strides, at least in the generative AI space.

[00:27:07] Matt Tonkin: Yeah, it’s interesting. So they’re doing it, but to your point, is this, oh, we’re behind, so let’s team up and see if we can, you know, slow down the people ahead of us? Yeah.

[00:27:22] Joe Peters: Yeah. Well, I think the other part of this is that having a multitude of perspectives flagging or calling out challenges is really important, because we’re going to be moving fast and we have to rely on these other parties.

[00:27:42] Joe Peters: Hey, look over here for a little bit, because there may be something where we don’t like what the next few steps forward are going to mean.

[00:27:54] Joe Peters: All right, let’s move on to our second hot take. Over 2,000 new martech tools were introduced in the last six months. And this is by Scott Brinker, who I have a lot of time and respect for. He had a great presentation at MOps-Apalooza early in November that I really, really loved. And he has this crazy map that started off with a logo for everything, I think in 2011.

[00:28:26] Joe Peters: And now I think it’s a colored pixel for everything because there are so many martech solutions in the ecosystem. And we have a number here of 13,080 on his map. Wow. And that’s an 18.5 percent jump in the last six months. And here’s an interesting quote: “The truth is you can build an empire with all the Gen AI that has been surfacing.”

[00:28:59] Joe Peters: And by an empire, I mean, of course, a business. And that’s Frans Riemersma of MartechTribe. So I think it’s pretty clear that generative AI is at the foundation or root of this. So what are your thoughts on this?

[00:29:20] Matt Tonkin: It’s funny because, you know, to your point, when it was the logos, and then it was a few hundred logos, and then a few thousand, I always remember the comments being, at some point this is going to plateau, there’s going to be consolidation.

[00:29:32] Matt Tonkin: And I feel like at one point it didn’t plateau, but it slowed a little, and then boom, now it’s just so many. And as, you know, a consultant helping clients make decisions on specifically what products to choose, it’s a little terrifying, because no one can understand 13,000. You know, maybe we could create an AI for that show.

[00:29:55] Matt Tonkin: But maybe we don’t broadcast that one. That’s our idea.

[00:30:00] Joe Peters: Well, I’m going to say, what I’ve seen happen is there are innovations made off these platforms, where there’s GPT-3.5 or 4, and then there are going to be innovations that come out that are just going to kill companies overnight.

[00:30:19] Joe Peters: Where, you know, with custom GPTs, obviously there were a lot of companies that were building wrappers to give you the kind of experience that a custom GPT offers. Once that’s released, then that company is no longer viable. So I don’t think that we’re going to have, like, I feel like it’s going to be a little bit of a bumpy ride in terms of the additions and subtractions.

[00:30:45] Joe Peters: I think there’ll be some pretty interesting data over the next year on that. And sure, we’re going to see more, but I think it’s going to be interesting to see what that delta is between the additions and the subtractions.

[00:31:00] Matt Tonkin: Hmm. And how many additions are subtracted before we even get to the one-year mark where he renews this, right?

[00:31:07] Matt Tonkin: Like, how fast is that turnover?

[00:31:09] Joe Peters: And let’s just also be honest, subtractions don’t happen overnight either, right? It’s not like someone who built a solution sees a custom GPT the next day and just decides to close up shop and announce it. Usually it’s a slow and gradual decline, right?

[00:31:30] Joe Peters: What would be interesting, if we could, you know, put it on this map, is the economic performance of each of these companies, right? And then we’d have some interesting stories on the ebbs and flows here, but I think it’s going to be pretty dynamic over the next year. All right. Let’s move into our final segment of the day.

[00:31:56] Joe Peters: Our pairing segment. So this week, I can’t believe that I’ve already gone back to the well to one of my favorite bands for a new song. The Smile, which is a side project of a couple of Radiohead members, including Thom Yorke, released a single called Wall of Eyes, and that’s what we’re going to listen to today.

[00:32:28] Joe Peters: And I do think it was a no-brainer to pick them this week, because after seeing that Gemini video, the idea of a wall of eyes watching us all the time is really something for us to think through. So it’s a great track from a great band. Some blue vinyl. It’s on my Christmas wish list.

[00:32:56] Joe Peters: Hopefully it arrives in January, when the album officially comes out. But great new music. And for those of you who are new to Launch Codes, you hear a little bit of the music at the beginning of the show, and then we give you a nice little segment at the end of the show so you can see, hear, and listen.

[00:33:18] Joe Peters: And hop over to your favorite streaming platforms to listen a little deeper if you’d like. But that’s it for our music this week. Matt, what do you have in terms of a beverage this week, or something different? I don’t know, this is a surprise for

[00:33:34] Matt Tonkin: me. So it is a beverage this week.

[00:33:37] Matt Tonkin: So what we’re going with this week is from a brewery called Flying Monkeys, and that’s in Barrie, Ontario. It’s called Psychedelic Puzzle Factory. So, yeah, their cans are amazing. And for anyone not watching this, I don’t even know how to describe it if you’re just listening, but it’s like a

[00:33:58] Joe Peters: colourful beer can acid trip.

[00:34:03] Matt Tonkin: And they have, so Flying Monkeys, this is sort of their thing. When, you know, the craft beer craze kicked off, you had all these companies doing crazy cans, crazy names, pun names, whatever the case, to really, I guess, stand out. And Flying Monkeys, I feel like, has kind of kept doing that even while a lot of other brands are, you know, going back to a more traditional look.

[00:34:26] Matt Tonkin: And I think that just, you know, it’s great beer, but it just reminds me of how you want to stand out among, you know, 13,000 marketing ops apps, right? Maybe it’s gimmicky, but how do you stand out? And not to say it’s gimmicky for Flying Monkeys, it’s sort of their thing now, and I think they did it better than the rest. But that’s why I chose this beer.

[00:34:52] Joe Peters: Awesome.

[00:34:53] Joe Peters: Well, I’ve actually had the chance to have a couple of their beers in the past, and if one ever comes across your way, you’re not going to go wrong with Flying Monkeys. Even if it’s just to embrace the art.

[00:35:07] Matt Tonkin: It’s great. There are so many little details on the can. I really do love the designs.

[00:35:12] Joe Peters: Yeah, it’s amazing. All right, well, thanks, Matt, and thanks to everyone for listening. Be sure to subscribe, rate, and review. You can find us on Spotify, YouTube, and Apple. Stay connected with us on LinkedIn, or by joining our newsletter, also called Launch Codes, using the link in the description. And as always, thanks, Mom, for watching.

[00:35:33] Joe Peters: See you next week.