Video: Rapid7 Global Partner Webinar: Operationalizing for Success Part 1 (Part 6a of 6) | Duration: 3612s | Summary: Rapid7 Global Partner Webinar: Operationalizing for Success Part 1 (Part 6a of 6) | Chapters: Operationalizing for Success (29s), Initial Setup Process (640s), Installing Connectors (1274s), Asset Visibility Strategy (1581s), Use Case Implementation (1757s), Process Map Visualization (2293s), Customer Journey Roadmap (2510s), Partner Resources Summary (3028s)
Transcript for "Rapid7 Global Partner Webinar: Operationalizing for Success Part 1 (Part 6a of 6)": Hi, and welcome to the, sixth part of the exposure management, webinar series. This is part one of operationalizing for success. So in this, we're gonna teach you, some of the things that you should be thinking about when you are, just after you've deployed Surface Command within a customer environment and how to help that customer get the best value out of the product. David, I think we we also talked about the fact that part one would be sort of more what's the process you should go through to understand how customers can realize value. And then in part two, I'll be covering more about how you actually implement that in Surface Command. So I think they work really well in terms of having a practical example in in this session and a process, and then I'll show you, you know, how else you can implement that in Surface Command. Yeah. Absolutely. So, yeah, it's kind of like theory and then practical. Theory is part one, practical is part two. The reason we do it this way is because to get the the best out of Surface Command dashboards, you need to understand that, people, processes, technology. 100%. Yeah. And so going through, like, the theoretical scenarios we're about to go through, mapping out your processes, where the technology interacts, where the people interact. You can then see what technical metrics you might need to measure in order to detect certain things with your processes, etcetera. So, yeah, I I guess we better introduce ourselves. Hopefully, by now everyone's familiar with my face, and probably yours as well. But, just in case, my name is David Higgs. I am the senior, solution engineer for the channel in EMEA, and I'll hand it over to you, Chris. Oh, so if you could introduce me as well. Yeah. Chris Neely, director of sales, engineering. I came over from the Noetic acquisition, last July. Wow. That's been a long time. And that's the technology that now fuels surface commands. So, yeah, looking forward to taking you through this. David, what what do what do you wanna cover in today's session? What are your objectives? And the reason I asked that is because, and that's why I can see one of your points here. I think surface command as a technology really lends itself to our partners and their ability to build services around it. It's not and whilst it's hugely powerful, it's not a technology that instantly tells customers what they need to do because every environment is different. So I think it really lends itself to our to our partners. And I I also think the extensibility of, Cypher as well. The the kind of limitless potential you have to be able to create metadata from the existing data, and kinda tailor the dashboards in in a really granular way, which allows, yeah, partners to create these really bespoke, dashboards that, like, align completely with the customer's security goals. Yeah. 100%. So, yeah, in this session, we are hopefully gonna, show you the way on, what pro services you could create for your end customers, how to package that up, and then, yeah, we'll talk about how you might go about that, give you some, frameworks to think about. So the whole kind of premise of this kind of presentation is around, wanting to go through that, stages of maturity. So even in something as mature as the vulnerability management, as a vulnerability management program, you get various customer maturities within that. 
So we look at these three different environments here. The dark line is the number of assets, and the blue line is the risk within the infrastructure. What this highlights is a difference in processes. In the top one, the risk line tracks almost identically to the number of assets in the infrastructure, and what that tells me is that there's not really any patching going on; we don't see the risk line move independently of the assets line. In the second one, the risk line climbs up and then you can see a load of patching, then it climbs up again and there's a bit of patching, then it climbs again and there's another load of patching within the environment. The third one shows a gentle wavy risk line, which indicates regular patch cycles. In that instance I can see this is a more mature customer just from looking at one graphic, because they have a regular patch cadence, as you can see from the bumpy risk line. So we have the same tool used across three different environments with different levels of success. It goes to show that you can install the tool, but unless you operationalize it, the level of success will vary. Interesting. And as the title says, it's about maturity, right? Absolutely. So what I want to do now, at a conceptual level, is understand a process for how identities move into and out of an infrastructure. In this instance we start on the left with HR. They initiate a request for a new employee or for an employee leaving. That goes into a backlog of work for IT operations to pick up, and then we have that team's bandwidth to complete the work, which is based on the number of employees and the number of tools you have that can do various parts of that task for you. IT ops pulls from that backlog and then configures an identity or removes one from the infrastructure. Here we end up with what you can think of as a stockpile of identities: they get configured and move in, they get removed and leave the infrastructure. Now within that we see a lot of identities that aren't set up correctly, and that can be down to a number of factors. They might have been set up correctly to begin with, but then various changes happen, or users get the ability to make modifications, and we end up with some identities that don't have multifactor enabled. That may even have happened by accident during the configuration stage. Linking back to the threat information from the last presentation, we saw three key initial access vectors across threat actors: misconfigured identities, exploitation of public-facing, vulnerable systems, and phishing, with identities comprising roughly 40% of all initial access. So this is something we obviously want visibility of. And it starts with visibility: we need Surface Command to help us gain that visibility across all of those different tools, so we can look at this from a huge macro level.
But even then, once we've found all of those identities and remediated them, it poses the question: this identity didn't just appear with misconfigured multifactor. How did that happen in the first place? What were all the other things happening in the background that led us to the point where we have an infrastructure with a series of misconfigured identities? Yeah, David, I really like this. Often when I'm doing POCs or deployments with Surface Command and I ask customers what their use cases are, they say, "Oh, we want visibility of identity." And it doesn't stop there; you have to qualify what they actually mean by that. What do they need to get out of it? And, by virtue of this slide, what's the process? How are they doing it today? That's how you work out exactly what you need to report on. So I think this is great in helping you understand their existing process, how they're doing it today, and what they truly need to report on. Yeah, absolutely. So this is a conceptual map of a process, but it's really about talking to your end customer and asking: how do identities come into your infrastructure? Who are the people involved? What are the tools involved? And how do they leave your infrastructure? Then, if you do discover identities with misconfigured multifactor, you can come back to this diagram and ask: at which point was that most likely configured, and at which point can we catch it as early as possible? So we have these additional steps here, where we have a quality control check: Surface Command looking across several different tools, looking at those identities across the infrastructure, identifying a discrepancy, and then feeding that discrepancy back. Ideally you would create an automation workflow that feeds it back into the operations backlog via a ticketing system integration, for example. But on top of that, as we've just discussed, you can do the remediation work, yet that doesn't stop these misconfigurations happening in the first place. So we then have the feedback and review process. That feeds back into our quality requirements from a compliance, security, operations, and HR point of view, which then allows the security team to go out and update that quality control check. And you may even feed those quality requirements back into the configuration stage for IT operations. This would be things like hardening across your infrastructure, or, to use an asset example, golden images; that's the asset equivalent of this process. So with that in mind, shall we move on to the next slide? I like the fact you're setting the scene here. Should we also cover what you get out of the box with Surface Command, just so we understand where we're starting from? Let's go for it. So, with the default configuration of Surface Command, you do get some content out of the box. There are a number of connectors that will be preinstalled.
Now the reason I'm talking about these, and their value, is that we will ingest from any existing Rapid7 technology that your customers have. So whether it's IVM or ICS or the Insight Agent, this information will be automatically ingested and present within Surface Command. You'll automatically start getting visibility of the scope of their environment, potentially where they don't have the Insight Agent running or vulnerability scanning in place. That's automatic; you don't have to worry about those as connectors. In terms of dashboards, we'll automatically start giving you an overview of the data: the size, the different data sources, the number of assets, the breakdown of assets. The controls overview gives you an understanding of where key controls are missing. That would be things like endpoint protection not running, vulnerability scanning, and, as David mentioned, a lack of things like MFA. And then we also have our external attack surface dashboards, an overview and an insights dashboard. These will probably be blank to start with, but customers and yourselves can easily add seed data so we can provide discovery information about that external estate. So you already get quite a lot out of the box, and apart from adding that seed information, you don't have to do any configuration; you'll have visibility of their existing Rapid7 technology. And the data mesh is kind of the USP here, isn't it? Yeah. It's one of those things I sometimes talk about, David, and sometimes not; sometimes it's a bit of a "so what". But effectively, and thanks for keeping me honest there, David, the data mesh is our ability to share information across the entire Rapid7 portfolio. It really enables us, with our new strategy, to unify security across the organization. Our Command Platform has full visibility of all our products, visibility of risk across everything, compliance, and so on. So the data mesh really is our ability to share information across the whole portfolio. Thanks for bringing that up. No problem. Should we go into the initial setup now? Yeah, let's do that. As I mentioned before, from an initial setup point of view, the first thing you really want to do is add that seed information. And honestly, from a POC point of view, the first thing I do is put in the customer's domain. It's easy to identify, everyone has one, and you don't need to query them and ask what their IPs, ASNs, and IP ranges are, which may take them a while to gather. Just putting their domain in will identify subdomains and pull out other IPs, because that's what the EASM service is designed to do. So put in their domain name and you'll start getting information in that dashboard. On top of that, there are some default connectors I personally recommend you install. You don't have to have them, but I recommend installing them in order to gain the greatest amount of value further down the line. The first one is the MITRE ATT&CK framework. This enables us to layer MITRE ATT&CK onto the solution and will allow you to understand where security mitigations are present, but more importantly, where they're not.
So you can then query the platform and say: show me assets that don't have antivirus, or don't have endpoint encryption, or don't have vulnerability scanning. And you don't need to care about the underlying technology; with MITRE layered on, you can simply say, show me assets that are missing a security mitigation. The next four connectors are really third-party threat intelligence. CISA: everyone knows CISA's Known Exploited Vulnerabilities list. FIRST is the Forum of Incident Response and Security Teams, with their Exploit Prediction Scoring System; they track vulnerabilities and measure whether they're being seen in the wild and whether they're being exploited. And then everyone should be familiar with the NIST NVD, which I like because you can bring in CVSS version 3 metrics on vulnerabilities. Those three are the third-party threat intelligence. The Combined Vulnerability Management connector is really a more efficient way of running FIRST and NIST: FIRST and NIST are two very large data sources, you're comparing them against all of your vulnerabilities, which is probably one of the largest data sources you'll be ingesting, and that can take a lot of time. The Combined Vulnerability Management connector runs them together in a cached format, so they run a lot faster. You can only install Combined once you've installed CISA, FIRST, and NIST, so get those installed first, then install Combined. After that, all you have to do is schedule CISA and Combined to run; the others you can leave alone. Combined will manage the third-party threat intelligence, and CISA runs on its own. You can schedule those; I typically run them nightly. And then after that you can start installing customer-specific technologies from our extensions library. I think we're up to about 155 or 156 connectors now to choose from. So you'll understand your customer's environment, what their security and technology stack is, and what you need to start installing. So from a customer perspective, the benefit they should see out of the initial setup we've just gone through: the external attack surface capability uses Rapid7 Labs in the background. We have a global honeypot network, approximately 150 honeypots across, I think, five continents, and those honeypots are looking at attacker data. We're not just looking at scan data, because that can be quite noisy; we actually look at exploitation data on those honeypots, so we're getting higher-fidelity results. We also have an Internet-wide scanner called Project Sonar, which is essentially what powers the EASM; it scans the Internet biweekly. So we will discover external attack surface data on your end customer's assets, and we then enrich that with information from Rapid7 Labs and also from the Shodan project. So if there are any CVEs noted against those assets, we'll highlight them in the external attack surface data. And then these other third-party feeds just help give the customer verification, I guess, and peace of mind: Rapid7 provides our own risk scoring, the Active Risk score, in conjunction with CVSS version 2 and version 3, and you can cross-verify that with the likes of FIRST and NIST.
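To make the "show me assets that are missing a security mitigation" idea from the MITRE ATT&CK discussion above concrete, here is a minimal Cypher sketch. The node labels, relationship type, and property names (:Asset, :Mitigation, HAS_MITIGATION, os, type) are hypothetical placeholders assumed for illustration; they are not Surface Command's actual data model.

```cypher
// Hypothetical schema: (:Asset) nodes linked to (:Mitigation) nodes via [:HAS_MITIGATION].
// Find Windows assets with no endpoint-protection mitigation attached.
MATCH (a:Asset)
WHERE toLower(a.os) CONTAINS 'windows'
  AND NOT (a)-[:HAS_MITIGATION]->(:Mitigation {type: 'endpoint_protection'})
RETURN a.name AS asset, a.os AS os
ORDER BY asset
```

The same negative-pattern shape (match the asset, exclude those with a given relationship) is what lets you ask "what's missing" without caring which vendor provides the control.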
We obviously use CISA KEV in our Active Risk score anyway as one of the key metrics, but from a customer perspective it provides an added layer of cross-verification. Yeah. And I've yet to meet a customer who measures the exploitability and criticality of a vulnerability in exactly the same way, right? I mean, everyone likes to use that risk score, and why wouldn't you? However, there are some teams, departments, or companies that believe in FIRST, or want to focus on some of the NIST metrics. So it's handy to have them there whether you use them or not. Yeah, and I guess as a partner it gives you that flexibility: no matter what strategy your customer is following, you can make it fit for them. Exactly, David. Exactly. So, once we've set those up, let's talk about installing connectors. Should we go into that in a little more detail? In order to install a connector in Surface Command, it is fully self-service. Within Surface Command you can go to the extensions library; I'm sure most people watching this video will be familiar with it. And here we can see it, if we go back a slide, sorry about that. This view is filtered to Surface Command. It's a fairly old screenshot because it says 125, and as I said, we're up to about 155. You can search for the technology you want, and it's very simple to install the connector. Now here's the tricky part, and it's probably not on the partners, it'll be on the customers: read the documentation. How many times has everyone said that? Each connector has very detailed instructions on how to create the correct credential for that technology. Typically they're read-only, but it'll tell you how to set it up, whether it's an API key, username and password, or whatever it may be, what the roles are, and what the scope is. If you follow those instructions, the credential will work and will get access. The number of times I have customers say to me, "Something's not working," and I say, "It'll be the credential." "No, I've set it up completely correctly." In the worst-case scenarios I've had them recreate the credential with me watching, and it's worked. So if they follow the instructions, the credentials will work; if they don't, you're likely to have issues with the connection. Just saying. It can be a bit frustrating, but just know that if they follow those detailed instructions, they'll be fine. Shall I grab the extensions page so we can show everyone where that is while you're talking? That'd be great, actually. So, yeah, here we can see it. Just click on any of those, David, it doesn't matter which one. You get an understanding of what they are, and then under documentation, great, it tells you how to set up the AWS connector's credential: what it needs and all the different roles required to access the different resources. So it's very detailed, and as I said before, if they follow that, they'll be good. When you're creating the credential, or when you put the credential in, there's an option to test it. This is a great option because it will tell you whether the credential is valid and whether it can gain access, and often it will tell you what's missing in that credential for us to have full access and be able to ingest all the different resources we need.
So you can click on any of them, really, and then you'll be able to see the test button as well, under the cog. That's right. Now this is probably a test instance, so it probably won't work and there were no credentials there, but you can actually test the credentials if you need to when you go through that. There you go, test connection. It's worth doing, as it gives you useful information about what's missing. Just another note; I don't know if we cover it here, it's probably covered in previous videos, but typically with connectors, where do you start? What do you ingest? Well, all of your customers will have some form of endpoint protection. This is a high-fidelity, great data source, so ingest it, whether it's Defender or SentinelOne or whatever it may be. Obviously we'll have all the Rapid7 technology, so that's likely to include IVM, but if they're using another technology, vulnerability management has great breadth of data. Sometimes the fidelity isn't great, it could be an unauthenticated scan, but it's got great breadth, so you want to bring that in. Identity is also really important, back to David's point about things like identity and MFA. Something like Entra ID (formerly Azure AD) is a fantastic source; it's got really good information, not only on users but on assets as well. So I would start with just those three data sources. If they're using Intune, that's another great data source. Understand what the security and networking stack is and bring it in. Don't just ask them what they're using, get a huge list, and go, "Right, we're going to bring the whole lot in." There's no point in that. Start small and understand what their use cases are; the use cases fuel the data sources. And actually, I covered in the previous video in this series which connectors you should be focusing on. Brilliant, thank you, David. Should we move on to actually understanding how you create that service, David? Let's go for it. And actually, this ties in nicely. We bring back this little slide that I've been using in a few presentations; we spoke about it earlier. Those initial access vectors revolve around multifactor, vulnerability exploitation, and social engineering. And what's interesting, David, is that the multifactor number from, I think, 2023 went up in 2024, and this may be 2024's number. That remote access with no MFA access vector is still continuing to grow, right? Yes. To reference a third-party report, the Verizon DBIR: I believe vulnerability exploitation and remote access without multifactor are now on par with each other. Wow. Yeah, over the course of 2024 we actually saw a surge in vulnerability exploitation, which is very interesting. That's right. I guess coming back to the platform itself, whoever in our dev team thought to align the platform with stats like this is just pure genius, in my opinion. We're essentially surfacing that on the home screen. In fact, let's go back to the platform here, back to the Command Platform. I always like to talk to customers about this. You see those three key stats here, and they're the biggest ones, right? Yeah, absolutely.
So with those in mind, and those examples you've just given: we want visibility of assets, and we want breadth of coverage as well as depth. The combination of your EDR and your vulnerability management platform, plus anything else that scans or audits the network, is really good to have in terms of asset visibility. Having identity in there as well, so making sure you've got Active Directory connected, plus anything else that's going to discover identities across the infrastructure, gives you that breadth of coverage. Obviously other identity services like Okta are great to have in there too. Yeah, definitely. Okay, let's go on to the next one. Do you want me to... I'll cover this one. Yeah, go for it. Once again, in the previous video in this series I actually covered some of the use cases. This is laid out slightly differently, but it's exactly the same. For those of you who saw that video, you'll notice it's broken down by easy, medium, and hard. Jamie and I had a discussion about those words and whether I was right to use them, because it's probably more about skill set and Surface Command maturity than "easy" versus "more difficult". The reason the easier ones are there is that they're easy to configure and easy to identify. Things like realizing the value of tooling, or what I like to call efficacy, are easy to do in the platform by just filtering asset pages. Coverage gaps are even easier: just tell me where a particular technology is not installed on my Windows servers. Very easy to do. And generating dashboards off those is, once again, very simple. So when I say easy, medium, and hard, it's more about what's required in the platform to provide and show this value. Things like migration projects are very common. I had a POC the other day where we were ingesting SCCM and Intune because the customer was in the middle of migrating from one to the other, so I had a very simple Venn diagram showing how many assets were still being managed by SCCM versus Intune, and you could slowly see that change. Another one is CMDB reconciliation, which is actually just a coverage gap: show me assets that are not being reported by the CMDB. It's a very common use case. We're not trying to replace the CMDB; there's a huge amount of value in having a CMDB and in us having that data. But often, as we all know, CMDBs can be a bit of a dirty word; they're out of date. So having that reconciliation is really important. The medium and hard ones are categorized that way, certainly for medium, because they typically require more context. If you want to start prioritizing, say you want to patch all your critical vulnerabilities and you've got 20,000 of them, where do you start? Well, let's look at the assets those critical vulnerabilities are on and understand the context of those assets. Where are they located, physically and logically? Do they have Internet access? What business context do they have? Are they running tier-one, mission-critical applications? That would certainly define them as critical. What users are accessing them? And then obviously other context, like the third-party threat intelligence we talked about. So, in order to prioritize, you need more context, and that's why it's set at medium.
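As an illustration of the coverage-gap and CMDB-reconciliation use cases described above, here is a minimal Cypher sketch. The labels, relationship type, and properties (:Asset, :Source, REPORTED_BY, last_seen) are hypothetical, assumed for the example rather than taken from Surface Command's schema; the pattern is simply "assets one source knows about that another source does not".

```cypher
// Hypothetical schema: each ingested tool is a (:Source) node, and assets it reported
// are linked to it via [:REPORTED_BY].
// Reconciliation gap: assets the EDR sees that the CMDB does not.
MATCH (a:Asset)-[:REPORTED_BY]->(:Source {name: 'edr'})
WHERE NOT (a)-[:REPORTED_BY]->(:Source {name: 'cmdb'})
RETURN a.name AS asset, a.last_seen AS last_seen
ORDER BY last_seen DESC
```

Swapping the two source names around gives the opposite report: stale CMDB records for assets no live tool can still see.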
The other thing here is that you then start using a different way to query the platform, called the query builder, where you can leverage the power of Surface Command and those relationships to say: show me assets that have a relationship to a vulnerability, and to a network, and so on. And then the hard ones are really listed as hard because you need an understanding of Cypher. Cypher is the standard open graph query language for querying a graph database. It's hugely powerful and enables you to do computations on the fly and calculate things like risk, so it's wonderful in that regard, but obviously there is a learning curve. Maybe hard is the wrong word, but there is a learning curve associated with the more advanced use cases. As I said, I've covered this before, so I don't want to go into too much detail, but the easy ones are definitely the low-hanging fruit: high value, easy to set up, and they should be easy for you. You understand your customer environment, you understand the technology, you understand what they're trying to achieve, so it's great to go after those ones. I guess with the medium ones, where you're introducing things like business context, you can go through exercises where you start to understand the types of software they should and shouldn't be using, which assets are permitted to use certain pieces of software and which aren't. Exactly, David, and this relates back to what you were saying at the beginning about understanding the process. "We really need visibility of X." Why? How are you doing it today? Where do we get it from? You need to understand that whole process in order to build, effectively, a service around it. Yeah, 100%. And you start seeing different remote management tools in the infrastructure, or different file transfer tools being used on unsanctioned assets, and that, I guess, allows you to spot potential attack vectors, or potential attacks, more easily. Yeah, definitely. Let's cover that off then. We've talked about some of these use cases, but there's work that goes in before these to understand how they get to that end state: what's important to them, where's the value, what data sources are needed, and what properties in that data do we need to combine to give us that view. So do you want to talk a little bit about how you would go about mapping out the process, the beginning states and end states? Yeah, absolutely. In conjunction with this, we've provided a bit of a template: our service discovery and service design workshop template, which we can use to help us understand how the people, processes, and technology are working together. In theory, you would map the process along the top here, then map where the technology interacts with the process, where the people and teams interact with the technology, and then any ideas for improvements or standardization, which would also help us spot anomalies within the infrastructure. So creating this service map is a really good exercise to try and achieve that.
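To give a flavour of the relationship-based queries and on-the-fly computation discussed above for the harder use cases, here is a minimal Cypher sketch that ranks assets by a toy composite score. The graph shape and properties (:Asset, :Vulnerability, HAS_VULNERABILITY, cvss, internet_facing) are hypothetical, and the score is illustrative only; it is not Rapid7's Active Risk calculation.

```cypher
// Hypothetical graph: assets linked to vulnerabilities via [:HAS_VULNERABILITY],
// with a boolean internet_facing property on the asset providing business context.
MATCH (a:Asset)-[:HAS_VULNERABILITY]->(v:Vulnerability)
WHERE v.cvss >= 9.0
WITH a, count(v) AS critical_vulns, max(v.cvss) AS worst_cvss
RETURN a.name AS asset,
       critical_vulns,
       worst_cvss * critical_vulns * (CASE WHEN a.internet_facing THEN 2 ELSE 1 END) AS toy_risk
ORDER BY toy_risk DESC
LIMIT 25
```

The point is the shape of the query: traverse relationships, aggregate per asset, then weight the result with whatever context the customer cares about (exposure, business tier, user access), computed at query time rather than stored.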
Now, if we look at the supporting document, the process map, we have an example mapped across the top. Our example comes from a goal: to increase visibility of controls across our environment. Our process is: if a coverage gap is discovered, the team investigates and collects and verifies the information, we have a review, and then the security team logs a ticket with IT operations. That whole process, with each step along the way. And here I'm mapping the technologies that would be used at each of those steps. I've even split out the security team, the IT operations team, and where the partner would get involved within that process. So as a partner, you can look at the customer's process and see where you can supplement it. Obviously you don't want to overdo it, right? Yeah, absolutely. You obviously don't want to inject yourself too much into a customer's process and create a bottleneck, but as a security partner you should absolutely be thinking about how you can help enhance it. So that's one example, and then we've also got an additional goal. You can download this document, think of either your own business or an example customer you've worked with, and then work through this goal: to create an ongoing process to ensure EDR is deployed across all users and all servers within the infrastructure. With that example, you would map the process, map the technology and where it interacts, and map where the teams get involved at different points in the process. I'd definitely recommend running through a theoretical example of that and seeing how it turns out. Yeah, okay. Some of the other methods you can use for building a process map: this one is quite good to show to customers because it's more visually compelling. You saw in the earlier part of the presentation the process map of how an identity moves in and out of the infrastructure. This way of building a process map through stocks and flows has been used across many organizations. A big advocate of this kind of method is Toyota; they came up with the Toyota Production System way of thinking, things like just-in-time delivery of parts. But these concepts can be directly translated to other areas of business and other technologies. A stock, in our example, was just the identities we had within our environment, and the flows are how they move in and out of the infrastructure. Then we have these little faucets, or valves, which are anything we can turn to limit or open a flow depending on what we want to achieve. For example, in order to configure an identity we need a team involved. To open that faucet more, we could increase the size of the team, or improve the tooling that team uses to make the work faster. If we wanted to put a limit on it so that only a certain number of identities could flow through at any one time, we could think about how to do that, for instance by keeping the number of identity tickets down to a certain amount per day. So there are things you can think about in terms of how to change the way things flow in and out of the infrastructure. We use the clouds for hidden complexity: once the identity was created, we didn't need to know about that part.
And after it was removed, we didn't need to know about that part either. We just needed to know that IT operations was the key bottleneck for configuring and removing. And then, of course, we had our process that was reviewing the quality control and feeding that back up the tree, creating a new ticket to go and review that configuration. So definitely use this as an example. You'll also see I've included a YouTube link; it's a video of this diagram being put together and how that might work. The visual mapping of this is really compelling for customers, because it helps them understand how things flow in and out and where the different teams and technologies interact throughout the process. Yeah, definitely very useful. I'm just conscious of time, David. Yep. One of the things I've done in deployments before, and you and I have talked about this, and I've seen other partners use, is having the right supporting documentation for a phased approach, almost a roadmap to success for their customers. Do you want to talk to me a little bit about that process? Yeah. To coin a very overused phrase, Rome wasn't built in a day, right? Or, you can't boil the ocean; we could keep the clichés going. Yeah, absolutely. So, as you said, throughout this presentation we've talked about what you need to do in the first instance and then what happens as you mature over time. The next thing we want you to think about is having a roadmap document for your customers on how to improve the environment over time. We recognize that in the initial project phase they're not going to have the budget, or even the metrics, to get to a point where they're really mature with it. What this gives you is different points over time where you can add additional project work and additional process work to, number one, help that customer on their maturity journey, but also, as a partner, give you additional points to cross-sell professional services into that customer. Think of this as a living, working document that evolves over time. This again corresponds to a document we're going to provide. It's just a rough framework; you can take it, carve it up, and do whatever you like with it, but it's geared towards looking at how you would work with the platform and add in additional services over time. For instance, in our initial installation we would connect information sources and address those key coverage gaps. You'd have details on ROI and business impact. Additionally, this helps anyone within your partner organization, for instance on the sales side, who wants to go back and engage with this existing customer; they would have extremely relevant talking points for those calls. Over time, we then look to create an ongoing service to address these coverage gaps at more of a root-cause level. So step one is we install Surface Command, we see the coverage gaps, and we address them. But as we said previously, those misconfigured identities and vulnerable assets didn't just appear out of nowhere.
That has happened over a period of time due to processes and training and how people interact with the technology, tickets between teams, and lots of different, what I would call, socio-technical reasons, because the business as a whole is interacting with different technologies and different departments. And then at three to six months you might have a different goal. So definitely take this and think about how you might map out a journey for a potential end customer as well. Yeah, I've seen a lot of these recently, Dave, because I'm involved in a lot of customer deployments, or at least the scoping, due to my knowledge of the technology. It's common to have a phase one of immediate value, to your point: things like coverage gaps and tooling efficacy. It's almost that easy, medium, hard aspect. Phase two would be bringing in additional data sources, where maybe they then want to start understanding how they can prioritize vulnerabilities. And then phase three is, we want to not only prioritize, we want to start measuring our environment based on risk. So these phases also lend themselves to the capabilities of the technology in a very similar way. Yeah. So I guess from a partner perspective, why are we talking about this? Well, one, because it leads to happier customers, but also because it leads to happier customers over a longer period of time; you increase the customer's lifespan with you, if that makes sense. And this leads us into what a successful customer journey could look like for Rapid7 partners. This is just theoretical, but I'm sure, Chris, you'll be able to advise from experience if we take some of these examples. For instance, on deploying Surface Command, you mentioned efficacy of tooling. There must be some examples where EDR licenses have had to be increased as a result of installing Surface Command? Well, in actual fact, both, interestingly enough. Yes, increased, where basic coverage gaps necessitate putting an endpoint agent on those systems: things like shadow IT, orphaned systems, areas of the business the Defender team weren't aware of and so hadn't put the agent on. But the flip side is that data sources can be incredibly noisy and messy. The console may tell you you've got 5,000 assets when in actual fact you've probably only got 4,000 that are actually active; the other thousand are old records of systems that no longer exist. So it can work both ways, and we've actually seen the license count go down because the actual active count is completely different from what's being reported in the console. Wow. So it helps you tidy up your licenses across multiple tools, not just your EDR, I guess. Yeah, definitely. Which is obviously great from an ROI point of view. Yeah, which leads to happier customers, right? Oh, yeah. So, in theory, here we go: CSAT has increased. Hopefully at that point the customer sees the benefit in renewing the Surface Command contract, which then gives us the ability to cross-sell more services. And this wouldn't just be Rapid7 services either.
What we want to get to is partners seeing Surface Command as the tool that's going to keep their customers, because they get better visibility across the environment, but also increase the value of those customers over their lifespan. Yeah. I'd add one other thing to note, and it's missing from this slide; it would be another step, because this is just an example. I've talked about it in previous videos, but we don't want to be just another dashboard, another pane of glass. Yes, it's great to identify MFA coverage, but to David's point, there's action here: MFA deployment increased by 30%, action; EDR licenses increased, action. The ability to take action on these insights that you work with your customers to generate in Surface Command shouldn't stop at the insight. The end state, to David's whole presentation here, should be the process behind that, to actually go off and remediate, fix, or update. That action is really important. So leveraging things like InsightConnect is hugely beneficial for actually taking action on all of these. Yeah, absolutely. Cool. So, David, I'm pretty sure there's another video in this series that covers the resources, but it is also really important that people are aware of them. So, in a couple of minutes, could you just summarize the resources for us? Yeah, we've got some great stuff happening within the partner program. To support our partners, the Rapid7 partner portal has a wealth of tools and resources available to assist with your PACT program progression: sales and marketing tools, registering your deals, progressing opportunities, and checking on renewals. The portal also includes access to our Partner Academy, training, and important certifications; some new technical ones are coming very soon. Some of the key exposure management resources are also available on the partner portal. Please feel free to leverage these in your conversations with prospects, or even just for your own learning. To get to your training, go to the partner portal you see here on the left-hand side and go to Partner Academy. Just click on "start your journey" and it will take you through to your learning journey. You'll see we've got different learning journeys depending on who you are within your organization. If you're a sales rep, that would be the sales professional journey; if you're technical, you'd go for the technical sales professional. As I say, we also have more technical-level certifications coming that would be aligned with technical sales but also post-sales individuals. And if you're a long-time Rapid7 partner, you can just test out as well. The ultimate goal here, obviously, is also to drive impact and business growth for everyone. As you identify opportunities, please, please, please register your deals on the partner portal. It's very quick, we've made it extremely simple, and it's the best way to protect your deal, securing it to you and your partner business, and the easiest way to garner the highest discounts available. 100%, so important. So thank you for all the questions during this session today. If you need any further help, please don't hesitate to reach out to your channel account manager or partners@rapid7.com. I'd just like to add a quick reminder: please register for the other sessions.
So we've got one final session coming up, which is part two, where Chris will be covering how to do things more in practice. Don't miss it. All the details can be found in the partner portal. Also, please be on the lookout for regular Rapid7 partner business communications, which detail product and solution launches and improvements, important partner program updates, and information on all upcoming new and on-demand webinars. And with that, thank you for joining us today. Thank you, David, that was really insightful. Great to see the process, and we'll put it into practice in the next video. Thanks for your time. Thank you. Bye, everyone. Thank you, everyone. Cheers.