Agent Prototype
Configure a visual agent builder to query your data, building a first iteration of a live, interactive research tool.
00:00 Hi there, and welcome to Session 5 of the Funder Research AI Agent Pilot. This session, we're going to talk through agent prototyping in two visual builders.
00:14 Again, just to root ourselves: we're following a six-step sprint equivalent, except we're spreading it over six sessions across six weeks, and today we are in the agent prototype step of the build phase.
00:34 Our agenda for the day: a quick recap of the last session on vector database setup, then a look at the goals and solution sketch we developed in Session 3 to guide our prototyping.
00:48 After that, we're going to walk through an implementation guide and build out prototypes in the Flowise and OpenAI visual builders.
01:04 In Session 4, we covered setting up a vector database in Pinecone, and we're going to use that for our Flowise integration and prototype today.
01:19 In the studio, as a reminder, you can access these kits under the lesson. There's a summary video of the last session, and you can access the resources discussed in that session in the kit.
01:45 For this prototyping session, our goal is to configure a visual agent builder to query the data and to build the first live iteration of the research tool.
01:58 In Session 3, we developed agent goals and a solution sketch. Our goal is to return results that are demonstrably better than a basic search of either a source spreadsheet or public web tools.
02:20 We're going to prototype and build the agents today, and then validate and improve them in the pilot phase.
02:33 The solution sketch: the key thing here is that we're going to have a chat interface where a user can enter a natural language prompt, and a source of truth, the government and open data we've been configuring
02:48 since Session 1, which the model or agent will reference in order to check the facts and return accurate results.
03:05 But we also want to check for more current, additional information. The open data we're working with is from 2023, and although that's good enough to use as a reference point for the status of foundations, we want to check
03:29 against more recent data on the live web. So we'll do a web search, and then we're going to export a formatted brief.
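That data flow can be sketched in a few lines of Python. This is a hypothetical skeleton for illustration only, not the actual Flowise configuration; the stub functions stand in for the real vector-store and web-search integrations.

```python
# Minimal sketch of the solution-sketch data flow: a natural-language query is
# answered first from the vetted 2023 dataset (stream A), then cross-checked on
# the live web (stream B), and the combined findings become a formatted brief.
# The search functions here are stand-ins for the real integrations.

def research_brief(query, kb_search, web_search, format_brief):
    """Run the two-stream lookup and return a formatted brief."""
    # Stream A: the source of truth (2023 government/open data).
    kb_results = kb_search(query)
    # Stream B: verify and supplement with current information from the web.
    web_results = web_search(query)
    return format_brief(query, kb_results, web_results)

if __name__ == "__main__":
    # Toy stand-ins so the flow can be exercised without external services.
    kb = lambda q: [{"name": "Example Foundation", "grants_2023": 1_500_000}]
    web = lambda q: [{"url": "https://example.org", "note": "still active"}]
    brief = lambda q, a, b: f"Brief for '{q}': {len(a)} dataset match(es), {len(b)} web check(s)"
    print(research_brief("foundations funding arts education", kb, web, brief))
```

The point of the ordering is the same one the system prompt will enforce later: the dataset is consulted before the web.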
03:43 Okay, so in order to implement this pilot, in advance of this session I developed some implementation guides to walk us through, and I played around with different AI models to help develop those.
04:11 Here you can see the prompt I landed on this week, the prompt I fed in to develop these implementation plans. I tried it in Google Gemini first, but I wasn't as pleased with the level of detail, so I then used Claude
04:45 Sonnet and was more pleased with the result. Through some back-and-forth prompting (you can see for Flowise, this is the 12th version), I got it to a place we can walk through today, and I'll make it available in the kit. As well, we
05:10 have in the left-hand tab menu here an OpenAI implementation guide, also developed using Claude Sonnet 4.5.
05:22 So let's walk through Flowise first. What we're looking at here is a split screen in Chrome: on the left-hand side, a Google Doc, which is the implementation guide
05:37 we're going to follow, and on the right-hand side, Flowise. This is our Flowise canvas; we went over it in Exploring Visual Agent Builders in Session 2, so if you want a refresher, you can review that.
05:56 So let's talk through this. I have the setup ready so we can go through it a little more quickly; you can slow this down if you want to follow along, but I'd encourage you to access this implementation guide in the kit and set this up in your own
06:20 instance of Flowise. We're actually going to use a couple of other tools here: Pinecone, our vector database, and, for our web search, an integration with a tool called Tavily.
06:37 You can see some information here: we're looking right now at a minimum viable architecture for the prototyping, so, if you remember, I'll just reference back.
06:49 We want to follow this golden path, the absolute minimum requirements needed. We can always make it more complex down the road, and we may do that in the pilot phase, or you can explore on your own. Here, a user is going to visit, they're
07:09 going to search, and they're going to view the results. That's the simplest version, okay?
07:30 So it's quite a straightforward application, and this data flow is essentially showing the solution sketch we developed last session.
07:36 Okay. So we have a start node. You create nodes here; by default, a start node will show up when you create a new agent flow workflow.
07:53 I'm going to just exit out of here and show you. This is under Agentflows, as a reminder, in Flowise.
08:01 You can create a new agent flow; I'm going to reopen the one I had here, but that start node will appear.
08:15 Then you can follow these instructions, but I'd also encourage you to open your agent model of choice, Claude in this case, for example, and if you have questions as you go about understanding each step, you
08:34 can dig in further. There's good documentation for Flowise, Pinecone, and the other tools we're going to be using here.
08:40 So I'd encourage you to make sure you understand each step you're taking, and not just plug and play.
08:52 First of all, we're going to set up the start node. We've got a chat input here, and we're going to enable memory. That's all we need to do for the start node.
09:09 The agent node is where we're actually going to be doing most of our configuration.
09:16 The system prompt is key. This is where we provide all the instructions needed for our agent, and we're actually going to build this out.
09:27 These are some initial instructions, and then we're going to add further instructions as we go through this document, okay?
09:36 You can see this on the left-hand side: what's highlighted is this text here, and then we have another critical instruction above, and some additional information on verification, as well as the response format, below.
09:54 That response format is the brief formatting. So that's the system prompt. Next, you need to add your OpenAI API key here; we went over how to do that in the last session, so you'll add that in.
10:26 We're choosing GPT-4o mini in this case; actually, we might just choose the latest GPT-4o to get better results than the mini.
10:50 I'll just scroll through here. As you can see, that's the way we've got it configured.
11:02 Temperature: I want to reduce hallucinations here, and in OpenAI we set this at 0.3.
11:14 So let me keep that the same across both builders; I'm going to update this to 0.3 as well. Now we're going to configure the Pinecone vector store.
11:33 If we come down to our knowledge vector embeddings, the guide provides instructions here; I'll just review the settings. We've got our credentials, the same OpenAI API key we need to add in, and text-embedding-3-small. This is important because, if you remember,
11:54 if I open up Pinecone here, this is the vector database we set up last time. That embedding model needs to be the same; that's critical.
12:05 Again, the dimensions: pull this from here, 1536.
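As a sanity check on that point: the index dimension has to match the embedding model's output size, and text-embedding-3-small produces 1536-dimensional vectors. A small illustrative helper (the dimension values are the standard OpenAI defaults; the function itself is mine, not part of the guide):

```python
# Map OpenAI embedding models to their default output dimensions, so the
# Pinecone index dimension can be validated against the embedding model
# chosen in Flowise. A mismatch here is a common source of upsert/query errors.
EMBEDDING_DIMS = {
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
    "text-embedding-ada-002": 1536,
}

def check_index_dimension(model: str, index_dimension: int) -> None:
    """Raise if the vector index was created with the wrong dimension."""
    expected = EMBEDDING_DIMS[model]
    if index_dimension != expected:
        raise ValueError(
            f"{model} emits {expected}-dim vectors, but the index is "
            f"{index_dimension}-dim; re-create the index or switch models."
        )

check_index_dimension("text-embedding-3-small", 1536)  # matches our Pinecone setup
```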
12:21 I've added in the knowledge name and description drafts, and we're going to turn on Return Source Documents so that the agent provides references for what it's querying.
12:48 Okay, I already have this set up, so I'm going to just test the streams together. But if you were setting this up for the first time, you would
12:59 exit out of here, click Save, and then test stream A. Stream A refers back to the solution sketch: you'd be testing before you have the web search hooked up, so just with the vector database, and you should get
13:21 accurate results for that stream. There are some troubleshooting tips here, things I came across in my own setup, and there may be others that come up. If you get errors that aren't listed here, the first thing I'd suggest is to use
13:44 a tool like Claude, for example: tell it or show it what you're seeing, and it can help you troubleshoot.
13:56 Okay. The second phase here, or second stream, is the web context, the web search layer. So go through that setup; it's the same thing here if we open the agent, and it's all done in the same tool.
14:13 Now, in order to do this web search, we needed to hook Flowise up to an external provider that can perform the search.
14:26 That's different from OpenAI. There were a few different tool options, and I haven't mentioned Tavily before, but it's a recommended tool and you can get a free instance of it. I'll just show it to you
14:43 here: at app.tavily.com, you can sign up and you'll get your API key. This is just the free plan, and it will allow the agent to search the live web.
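For reference, Tavily's search endpoint takes a simple JSON request; this is a sketch of what the Flowise node assembles under the hood. Field names are based on Tavily's public REST API, but treat them as illustrative and check the current docs before relying on them.

```python
# Build the JSON payload for Tavily's /search endpoint. Flowise assembles an
# equivalent request from the node settings we toggled (Include Answer, etc.).
import json

def tavily_payload(api_key: str, query: str, include_answer: bool = True,
                   max_results: int = 5) -> str:
    body = {
        "api_key": api_key,                # from app.tavily.com (free plan works)
        "query": query,
        "include_answer": include_answer,  # mirrors the toggle in the Flowise node
        "max_results": max_results,
    }
    return json.dumps(body)

payload = tavily_payload("tvly-XXXX", "foundations in Manitoba funding arts education")
# A real call would then be roughly:
# requests.post("https://api.tavily.com/search", data=payload,
#               headers={"Content-Type": "application/json"})
```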
14:57 So you get your API key here, and you can follow these instructions. In the agent node, below the model section, under Tools, you'll click Add a Tool and search for Tavily. I already have it connected above, so I won't connect it again, but that's how you do it;
15:21 you'll need to put in your API credentials, which is what I just showed you in the other tab, so grab that once you create it. Then you're going to configure Tavily. Scrolling down, the guide gives the instructions, and I'll scroll through just so you can see what I have toggled
15:46 on. For Year, there's no option to leave it blank, so you can just select Year, and then you don't need to select Days.
16:10 Include Answer: yes. So those are the settings, pretty straightforward. You can ignore Option B, though I did leave in some additional instructions here if you want to explore either Option B or C, which use different tools to connect to if you don't
16:38 want to use Tavily. But Tavily is the one I'd recommend. Then, as it says here, we want to replace or update our system prompt with this information, and, as I've shown you before, the verification process has been added in there. So if we take a look here at
17:04 the system prompt, the verification process has been added in. Let's actually do a quick review. We have the start node, which is a text input, a chat interface; that's what prompts the agent. And in Flowise it's actually quite straightforward, with all of the
17:42 inputs to the agent being in one place, one node. So we connected our model, the GPT-4o model; we have our system prompt; and we have this critical instruction here. It wasn't mentioned in the instructions, so I'll just add
18:17 it to the documentation. It's saying that we want to search stream A, the foundation dataset, before we go and do a web-based search.
18:47 It also restricts the agent from just making things up, hallucinating. So this instruction is meant to reduce hallucinations and make sure we're checking stream A.
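To make that concrete, here is the shape such a critical instruction typically takes. This is an illustrative paraphrase of the idea, not the exact wording from the implementation guide:

```
CRITICAL INSTRUCTION:
Always query the Canadian foundation knowledge base (stream A) FIRST.
Only after retrieving stream A results may you run a web search to verify
status or add current context. Never invent a foundation, grant amount, or
contact detail: if neither stream A nor the web search supports a claim,
say so explicitly in the brief.
```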
18:56 We also have a tool set up, our web-based search, and then we have our vector database, the knowledge source, hooked up.
19:07 And again, it can't hurt to look back at our solution sketch. We pull up this chat window; that's how we start the prompt.
19:18 We've got the vector database, the Tavily search, and the model, and then we're going to generate the brief. And we're going to test this out.
19:28 Actually, no, there's one more piece here that I have set up. We're going to test in a moment, but first the brief: update the agent system prompt with this brief formatting.
19:44 Sorry, come down here. Here we go: output formatting and synthesis. There was one more piece to the solution sketch, formatting a brief, and I actually have that in there already too.
19:56 So everything under Response Format in the system prompt generates the brief.
20:06 You can see how the response should be formatted. This comes back to our goals: understanding the rationale for the match, a financial snapshot of the suggested foundation, its geographic focus, contact information, and additional context.
20:29 There are also ranking rules for how results are prioritized. These are things you can edit for your specific applications.
20:41 Okay, so if we come back here, I'm just going to scroll down, and now we can test the complete flow.
20:49 Let's do a search: foundations in Manitoba funding arts education. Going to test.
21:07 You can see it's checking the foundation database, then it's going to go and check the web, the Tavily results.
21:14 It's plugging away; it's going to take a moment, and then it's going to return the top five results based on those criteria.
21:29 Alright, so let's take a look here. That's interesting.
21:52 Okay, we've got Manitoba Craft Council. Interesting, because that wasn't super helpful.
22:06 Let's try: organizations supporting community and Indigenous youth. I think "arts education" may have been the wording that threw it off.
22:24 So let's see here: organizations supporting Indigenous causes, Indigenous reconciliation. I should have said BC, but hopefully that'll fix it. It's searching the Canadian foundation database, then searching the Tavily results.
23:09 We're exploring this together. Okay, it looks like it's done, so let's review what the results were.
23:37 Indigenous Reconciliation Fund, with $1.5 million in total grants in 2023. That's pretty promising. Let's take a look; it says it's active.
23:57 Okay: a Catholic commitment to healing and reconciliation, interesting. Current priorities include fulfilling a $30 million commitment over 5 years; contact your affiliated diocese, okay?
24:28 And it shows how it was verified. So that's an interesting result. Again, we'll do the same search in the OpenAI agent builder to see how they differ, and then in the pilot phase, the next phase, we'll actually go and compare the two in terms of
24:52 assessing the quality of the results against a general search using ChatGPT or Claude with the same prompt, or a web search, to see how they stack up.
25:06 Okay, I'm not going to go through all of these, but I clicked through everything. Just to take a look: Construction Foundation Society was matched because it supports Indigenous communities through trades training and employment pathways.
25:25 This Fraser Headwaters line is interesting; it's not a lot of money. So we may actually want to tweak our prompts to make sure results are over a threshold in terms of total grants issued.
25:43 This is the kind of thing we're seeing: our prototype is functioning, but we clearly need to make some tweaks to improve the results.
25:52 Ending Violence Association of British Columbia: eight million dollars in grants, significant. Dogwood Heritage, okay. I would say, at first glance, I'm glad it's returning what seem to be active grants; we're not getting a bunch of dead links, and we
26:20 have some improvements to make in terms of making sure we don't get funders returned that are not making significant contributions,
26:31 or rather, not making significant grant allocations annually. I will click through and take a look, though, because this is 2023 data, so maybe the web search found that a given foundation actually has a new program
26:51 or is active; but since it checks the foundation dataset first, I'd be surprised if that were the case. So it's worth clicking around and checking it out; try your own tests. In the interest of time, let's move on to the OpenAI visual builder, and then we'll continue
27:10 to improve the system prompts and verify these results against a general web search in the pilot phase.
27:25 Okay, so if I open this up: it's a similar but different setup. Interestingly, when we got into OpenAI after the last session, using Pinecone turned out to be more complicated, because we would actually have to use the Model Context
28:37 Protocol, an MCP node, in order to connect OpenAI to Pinecone, the vector database. And that's not as low-code as we're looking for in this program.
28:59 So we've got a different approach here, and I think the fact that we're having to use a bit more code in this case to get the visual builder up and running speaks for itself. I'll talk you through it, and I do have this set up here and functioning, but,
29:19 as we've talked about before, going from the prototype phase to piloting, we're really going to go with one of these two visual builders. So I'm going to show you this OpenAI setup, but for the actual pilot, I'm going to suggest we proceed with the Flowise visual builder because
29:40 of its ability to integrate more seamlessly, and in a more straightforward manner, with Pinecone.
29:48 Okay. So first of all, since we're not using Pinecone, the more straightforward approach in OpenAI is to directly upload the data to OpenAI, under Storage here.
30:13 However, you can't just upload a CSV file; you have to upload it as either text or JSON (it accepts a few different file types). So in order to create that JSON file, there are some additional steps required:
30:38 there are instructions here on how to do this in Google Colab. We've used Google Colab in past sessions, so you can follow these instructions if you'd like.
30:52 The gist of these instructions is that the 32,000 records we had uploaded to the Pinecone vector database need to come back out as a JSON file, and you can't do that directly through Pinecone. We already had Google
31:14 Colab connected to Pinecone in order to upload this data, so we're going to download the data using Google Colab as well, as a JSON file.
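The Colab export boils down to paging through the index IDs, fetching each batch, and writing the metadata out as JSON. A hedged sketch using the Pinecone Python client; the index name and field names are placeholders, and `Index.list` pages IDs on serverless indexes in the v3+ client, so verify against the client version you're running.

```python
# Sketch: export records from a Pinecone index to a JSON file that OpenAI's
# vector-store upload will accept. Assumes the v3+ Pinecone client and a
# serverless index; names like "foundations" are placeholders.
import json

def vectors_to_records(vectors: dict) -> list:
    """Flatten fetched Pinecone vectors into JSON-serializable records."""
    return [{"id": vid, **(v.get("metadata") or {})} for vid, v in vectors.items()]

def export_index(index, path: str) -> int:
    records = []
    for id_batch in index.list():            # pages through all vector IDs
        fetched = index.fetch(ids=id_batch)  # returns vectors keyed by ID
        records.extend(vectors_to_records(
            {vid: {"metadata": vec.metadata} for vid, vec in fetched.vectors.items()}
        ))
    with open(path, "w") as f:
        json.dump(records, f)
    return len(records)

# Usage (not run here; requires a live Pinecone connection):
# from pinecone import Pinecone
# index = Pinecone(api_key="...").Index("foundations")
# export_index(index, "foundations.json")
```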
31:25 Because we're not proceeding with OpenAI's visual builder to the pilot phase, I'll just note that the instructions for how to do that are provided in this document, the implementation guide; I'm not going to go through it step by step, as there are several steps.
31:43 If you're just looking to prototype one, I suggest using the Flowise flow we just went through. But I'll walk you through this so you can see how it's set up, and then I do want to compare the results of the two prototypes together.
32:01 So I've already set this up. You can see here I've uploaded this foundations JSON file, around 27 megabytes, and it's been uploaded to this vector store.
32:26 What we've got here is a vector database just like we had in Pinecone, except in this case we're not using Pinecone.
32:33 Okay, so we downloaded the data and added it here to simplify things, and we're going to use the native OpenAI vector database. The ID for this vector store is here; that's important, as we're going to reference it in a moment, so you can copy it. We're going
32:55 to go now to the instructions. We've done all these steps, and as I said,
33:02 the last step is to copy the vector store ID, which we just mentioned. Now we're going to actually build the workflow, the agent flow. This looks very similar to what we have in Flowise, but I added one step here in terms of guardrails.
33:29 We're going to start here: text is an input, the same thing as in Flowise. For the MVP we're not going to complicate this further, so we'll leave the variables blank. That's our start. Similarly, we're going to see this preview; that's
33:53 how we're going to prompt the agent for testing. I'm going to skip over guardrails and come back to them in a second. This next node is the agent node, and I'm going to scroll through here, configuring the main agent.
34:14 So we have our instructions, the system prompt, same as in Flowise, and you can see similar instructions, including formatting instructions; that's what's pasted into these instructions, okay?
34:38 So that's the system prompt. Selecting the model, we use GPT-4o, so we're comparing apples to apples. Then, same as in Flowise, we've got two streams: the Canadian foundation data and the web search.
35:02 In the Canadian foundation data, we had a critical instruction, if you recall; let's just go up to our system prompt. Now, temperature again: remember, for Flowise I changed it from 0.9 to 0.3 to keep the responses more factual, less on the creative side.
36:15 We want more accuracy, factuality, and consistency, hence the lower temperature, and you can see that set here under Temperature. You pay based on tokens, so you can set a number of tokens that can be used for a given query to control the cost.
36:57 I don't think I have anything else to say on that, except for chat history: you want to keep the chat history enabled.
37:01 Okay, I'm going to click out of here. Scrolling down: adding the web search tool. I didn't go over that yet.
37:18 The web search tool; let's just review this. We've got a tool here that I've already added.
37:31 Actually, let's just add them both again. First the Canadian foundation data: that's a file search.
37:39 I'm going to add it, attach the existing vector store, and then choose your vector store. Okay. And then I'm going to add a web search.
37:49 In this case, unlike Flowise, we don't need to use Tavily. Flowise needed to connect to an external provider, Tavily, to search the web, but OpenAI has web search built in, so we can use its standard web search.
38:08 I'm not going to set country and so on, because with our system prompt and the foundation dataset we're using, it's going to be searching Canadian foundations, so I'm not going to restrict that further. And there may be sources outside Canada that
38:27 we want referenced that relate to Canadian foundations, so I don't want to restrict it there unnecessarily. Okay, so we've got our two tools, and then the system prompts.
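In API terms, those two tools correspond to entries in a Responses API tools array, roughly like the following. The tool type strings are based on OpenAI's Responses API at the time of writing, and the builder generates the equivalent configuration for you, so treat these names as illustrative rather than definitive.

```python
# Sketch of the two-tool configuration the Agent Builder assembles:
# a file_search tool pointed at our uploaded vector store, plus OpenAI's
# built-in web search. The vector store ID is a placeholder.
def agent_tools(vector_store_id: str) -> list:
    return [
        {"type": "file_search", "vector_store_ids": [vector_store_id]},
        {"type": "web_search"},  # built in; no external provider like Tavily needed
    ]

tools = agent_tools("vs_PLACEHOLDER")
# A direct API call would then look roughly like:
# client.responses.create(model="gpt-4o", tools=tools, input=user_query)
```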
38:39 They're set up to make sure those tools are used accordingly. One enhancement here that I think is neat to consider is a guardrail. If this is for internal use, it's maybe a little less relevant, and maybe not
39:05 critical for this particular use case, but I just wanted to give it as an example of an enhancement. You could put guardrails in place to prevent people from adding personally identifiable information into the search; for example, if somebody had an email for
39:23 a foundation contact, that shouldn't be added in. There are also moderation and jailbreak guardrails; you can get an explanation here: they block harmful content and detect malicious use, right?
39:44 Those are all good things to prevent, and you can review the rest of the guardrails here if you'd like.
39:49 Again, this is an example. One critical thing: even if you're just using this internally, for a tool like this you'd want to be careful, because you do pay per search. It's a small amount, but still, if you put this out publicly and it got
40:09 used significantly, you would end up paying. Okay, so that's the guardrails. There are some additional enhancements here, approvals, et cetera, but I'm going to move along to testing this out.
40:25 I'm going to use the same prompt we used in Flowise. Here we can test with the preview.
40:50 Organizations supporting Indigenous reconciliation in BC. While that's going, I'm just going to put these side by side.
41:17 5% match.
41:54 Living Lakes Society was the second result. I don't think we had Living Lakes Society; no, we didn't, so that's interesting.
42:08 It looks like not a huge number of grants, but better than the Construction Foundation of BC, actually.
42:23 Sisters of St. Anne. So we need to look into why; maybe the system prompt, though I don't think we restricted it differently in terms of the number of sources.
42:56 But again, it shows you it's working: it is checking. It didn't provide web sources here, though, so I like that less than the results from Flowise.
43:15 It's nice to have it check and provide the links; again, check the system prompt. But at least this gives you a sense of the minimum viable approach to the prototype setup here. We've got a bunch of work still to do on these prototypes, and we'll be diving into that in the upcoming pilot
43:38 session. I look forward to seeing you there; it's going to take place on Wednesday, January 21st at 15 p.m.