While at RSAC 2025, Cyber Security Tribe was fortunate to spend time with two thought leaders in the field of AI within cybersecurity: Piyush Sharrma, CEO & Cofounder of Tuskira, and Chris Kirschke, Field CISO at Tuskira, who answered our questions about how AI is impacting cybersecurity in 2025. We started the discussion off by asking:
How do fragmented security tools lead to blind spots and inefficiencies in organizations?
Piyush Sharrma - Fragmented security tools: think of them as security guards working at different doors, from the back doors to the front doors to the side doors, but with no communication between them. These fragmented tools have one significant issue: the perspective they bring is their own perspective, not the customer's perspective. To build the customer perspective, all these tools need to talk to each other and find out what the real threats are and what the hoaxes are. That is a fundamental issue of any security tool.
Additionally, these tools are only as good as their configuration. If customers do not have the ability to identify how these blind spots translate into security issues, there is no value in having these tools in place. You may be using best-in-breed controls, but they are static in nature, meaning they require constant modifications, constant adjustments, constant customization. Today, that's driven by humans, but in the world of AI, it needs to be driven by the data. That's the fundamental issue with fragmented tools: they don't have the full picture. They're like the blind men in the story of the elephant: they touch the trunk and say it's a snake. The second issue is that they cannot be customized based on the real threats an enterprise is facing.
What steps can CISOs take to enhance security posture without replacing their current tools?
Piyush Sharrma - The first and most fundamental step is to bring the security data into one place so that you can build some understanding of 'What is your enterprise facing? How are your products and your applications, the applications that your customers' customers are using, exposed?' Then you run more algorithms, more smart analysis, and identify how you respond to the threats that really matter. If I compare any EDR solution, whether it's CrowdStrike, Palo Alto, or SentinelOne, they are only as good as they are when running on one particular endpoint; they do not know how the enterprise firewalls are facing the threats. So the first and most fundamental thing CISOs can do is bring the context together and build understanding on that context to identify which real threats they need to respond to; the 'how' is the later part.
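The "bring the context together" step can be sketched in a few lines. This is a minimal illustration, not Tuskira's implementation: the tool names, fields, and alerts below are all invented. The idea is that findings from separate tools, once normalized into one schema, can be correlated per asset, and an asset flagged by multiple independent tools is a stronger signal than any single alert.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str   # host, container, or app the finding applies to
    source: str  # which tool reported it (EDR, firewall, scanner, ...)
    signal: str  # what was observed

def correlate(findings):
    """Group findings by asset, keeping only assets that more than
    one independent tool has flagged."""
    by_asset = {}
    for f in findings:
        by_asset.setdefault(f.asset, []).append(f)
    return {asset: fs for asset, fs in by_asset.items()
            if len({f.source for f in fs}) > 1}

# Hypothetical feed from three fragmented tools:
alerts = [
    Finding("web-01", "edr", "suspicious process"),
    Finding("web-01", "firewall", "beaconing to known C2"),
    Finding("db-02", "scanner", "outdated TLS"),
]
print(correlate(alerts))  # only web-01 is corroborated by two tools
```

In a real platform the normalization layer (mapping each vendor's alert format into the shared schema) is the hard part; the correlation itself only becomes possible once that shared schema exists.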
What do you think are the biggest hurdles for a CISO when implementing AI currently?
Piyush Sharrma - The biggest problem is trust. There are so many myths and misconceptions around what AI can do, what AI is, whether AI is reading my data. It's actually a very simple problem, really not complicated. But the most fundamental part of all this is working with the vendors who are "AI first", who are "AI native", and not retrofitting AI. When cloud migration started, a term was coined: "cloud native". It meant the software and the applications were built for the cloud, from the cloud. Similarly, it has to be AI native. These are the products, like Tuskira, which are built for AI, from AI.
Chris Kirschke - I think the other thing for CISOs is that it's yet another term, right? When you look at it, we were just getting through SOAR and now we're getting into AI. There's the issue of being attacked with AI faster than I can get AI deployed inside my environment. Some CISOs believe they are covered because AI is being added into the products they already own, and therefore they don't worry about it. But how do you combine all that holistically? You really can't. At the end of it, you still need a centralized platform to orchestrate all of that AI. You have to do that based on the personas of the staff that you're not augmenting, but supporting. I think that's critical.
With the different AI systems, do you think there's a problem when they have different data sets? Surely we have an issue if three different data sets all get lumped together? Is that something that needs to be resolved?
Piyush Sharrma: Absolutely. AI is all about context, and context is all about data. If the data is not rightly structured in the right place, your AI output is not going to be as effective as you want it to be. This is why, when you bring the data together, there has to be a strategy of 'why am I bringing this data'. If this data is going to be used by AI, then the AI and the data need to be friends with each other. And this is what I mean by being AI native: you have to bring the data AI native, build the models AI native, build the software AI native. It cannot be rows and columns retrofitted just to be an "AI-powered" platform.
How can AI predict potential attacks in 2025? How is it helping?
Chris Kirschke: I'm going to use our term, digital twin, here. Security, application developers, engineering teams, and infrastructure teams are rarely all on the same page; they rarely all have the same body of knowledge. I think, this year, you will see a lot of that AI start to bring that context together, where I now know how my application is actually deployed in the cloud. I know the components of that application and where they're deployed. I know what the last run was from a CI/CD perspective. I know the state of the application. Now I can come to the table and say I have a digital twin. I can see what the impact of changing a configuration or applying a policy would be, and what the actual risk reduction is at that point. I don't have to have four meetings, 10 Zooms, and a bunch of backhand negotiations.
Piyush Sharrma: What Chris mentioned is very important, which is the knowledge. AI is all about knowledge, right? What AI is doing by analyzing a lot of data is building a knowledge graph, building a neural network. This neural network is nothing but a reasoning capability that AI has and humans do not. Today, every best-in-breed product is built with the user in mind: the user has to understand how they will be attacked so that they can mine and hunt the data to see whether they are affected by that attack. This is where AI is actually going to replace that hunting work: my models and my agents are self-trained and purpose-built, they can predict more scenarios, more breach paths and attack paths, and they will be able to surface them faster than humans ever have. This is the fundamental difference, and this is why 2025-26 will be about predicting the attacker's next move, just as AI is about predicting the next word. My belief is that threat hunting's ability to predict more scenarios of how you can be breached will actually accelerate.
Chris Kirschke: I think you're going to see it more in 2026. Right now, everybody's doing more with less and trying to keep up. They're seeing that AI can do that, but you still have to upskill your teams. You have to upskill the mentality of your organization. You've also still got to realize the investments you made over the last few years; you've got to get the money and the ROI out of those. You will see more centralized platforms. Some CISOs now tell their teams to take a 30-day pause: you're going to go learn how to deploy and operate an MCP server and build agents. The agents will provide more bandwidth for the CISO's team.
Where do you think AI within cybersecurity needs improving as a whole?
Chris Kirschke: The two pieces I always hear about, and you'll hear about them this week as well, are the identity that the agents are using, and what data that agent can actually go and get to provide context or act on. We've been trying to solve the principle of least privilege, and the principle of least access, for 40 years. I think those are the two biggest risks that I see being talked about here all the time. Can it go use that data? If somebody gets to that agent, what else can it do? What's the blast radius of that?
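The least-privilege concern for agents comes down to deny-by-default authorization: an agent can only touch what it was explicitly granted, which also bounds the blast radius if the agent is compromised. A minimal sketch, with entirely hypothetical agent and scope names:

```python
# Deny-by-default scope registry: each agent is granted an explicit,
# minimal set of permissions. Names here are illustrative assumptions.
AGENT_SCOPES = {
    "triage-agent": {"alerts:read"},
    "remediation-agent": {"alerts:read", "tickets:write"},
}

def authorize(agent: str, scope: str) -> bool:
    """Permit an action only if the agent was explicitly granted
    the scope; unknown agents and ungranted scopes are denied."""
    return scope in AGENT_SCOPES.get(agent, set())
```

Under this model, a compromised triage agent can read alerts but cannot write tickets or reach any data store it was never granted, which is exactly the "what else can it do" question above.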
Piyush Sharrma: I'll make a more ambitious and bold statement here. I think AI is all about augmenting human teams with productivity that we have never seen before. What a human could do in one day, AI is able to perform in minutes. They have all the context, they have all the data, and they don't have to pivot through spreadsheets. The productivity improvement is going to be the biggest outcome of AI, which also translates into cost, into time to respond, and into accuracy: reducing the false positives, increasing the productivity, reducing the cost, and taking minutes versus weeks to respond to a problem. That's a very sizable, very measurable outcome that you can actually see in 2025.
Do you think the agent might reduce false positives?
Piyush Sharrma: Today's AI still keeps the human in the loop, which is actually the antithesis of building self-learning AI agents. I believe it's important to build that trust. Essentially, as you start to reduce human intervention, decision augmentation should be the focus, not stopping at automation. Everybody looks at AI agents as 'they'll be part of my AI team, we have virtual AI analysts, they will help me do things', but they should look at them as 'helping me augment my decisions'.
How can AI accelerate remediation within organizations?
Piyush Sharrma: Remediation is a data problem again. For every problem, multiple teams get involved, from developers to DevOps to security ops, all analyzing whether it is important enough to remediate, because remediation comes with a cost. AI, in my opinion, can actually help with root cause analysis, which is the long tail of actual remediation. Root cause analysis is about what I need to fix when a problem gets reported: out of 10,000 reported problems, what is the root cause? Which container, which repo, which piece of code? That is what I need to fix. I think AI can do that analysis better than three people.
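The "10,000 problems, one root cause" point can be made concrete with a small sketch. The findings and root-cause labels below are invented for illustration; the idea is simply that once every finding is tagged with its source (a base image, a repo), counting collapses a huge backlog into a short, ranked fix list.

```python
from collections import Counter

# Hypothetical backlog: thousands of findings, each already traced
# back to the artifact that introduced it.
findings = (
    [{"cve": "CVE-2024-0001", "root": "base-image:python-3.9"}] * 8000
    + [{"cve": "CVE-2024-0002", "root": "repo:billing-service"}] * 1500
    + [{"cve": "CVE-2024-0003", "root": "repo:legacy-cron"}] * 500
)

def rank_root_causes(findings):
    """Count findings per root cause; fixing the top entry
    closes the largest share of the backlog in one change."""
    return Counter(f["root"] for f in findings).most_common()

print(rank_root_causes(findings)[0])  # ('base-image:python-3.9', 8000)
```

The hard part, and where the AI claim above applies, is producing the `root` label in the first place: tracing a reported vulnerability back through the container, the repo, and the code that introduced it.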
Chris Kirschke: I think one thing that will be very helpful is seeing how well AI does at extracting business logic out of an application. I've had many incidents in my career that had nothing to do with a vulnerability.
Usually those are the shoot-yourself-in-the-foot moments where there is no expression of business logic anywhere. There is no ability to go find it until it rears its ugly head. All of a sudden, a Lambda function generates a report and sends it to the client. Then the client calls you and says, "I have 50 clients' data in my report." Which could be due to forgetting a parenthesis on a filter in the query, which is something you can't really test, and you can't leave it to your VM team to find. That's where AI has a really good opportunity: to start to extract business logic and then lay that on top of what the actual application deployment and infrastructure look like.
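The missing-parenthesis incident described above is an operator-precedence bug, and it is easy to reproduce in miniature. This is a hypothetical reconstruction with made-up data, not the actual incident: in Python, `and` binds tighter than `or`, so a filter written without parentheses silently widens the result set and leaks other clients' rows.

```python
rows = [
    {"client": "acme", "status": "final"},
    {"client": "other", "status": "draft"},
    {"client": "other", "status": "final"},
]

def report_buggy(client):
    # Intended: rows for this client whose status is draft OR final.
    # Without parentheses, `and` binds tighter than `or`, so this
    # returns EVERY "final" row regardless of client -- the leak.
    return [r for r in rows
            if r["client"] == client and r["status"] == "draft"
            or r["status"] == "final"]

def report_fixed(client):
    # Parenthesizing the status check restores the intended logic.
    return [r for r in rows
            if r["client"] == client
            and (r["status"] == "draft" or r["status"] == "final")]

assert any(r["client"] != "acme" for r in report_buggy("acme"))  # leaks
assert all(r["client"] == "acme" for r in report_fixed("acme"))  # correct
```

No vulnerability scanner flags either version; only knowledge of the intended business logic ("a report contains one client's data") distinguishes the two, which is Chris's point about extracting that logic.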
How do you think AI is going to be used in the next five years? How is it going to develop?
Piyush Sharrma: AI is a friend. Let's assume, and let's believe, that AI is a friend. It will take our eyes off the things that have slowed security teams down for the last 20 years. It is going to shift the problem from one place to another. However, one thing it is definitely going to do is accelerate the team's velocity significantly. It will also bring down the cost very significantly. So my bet is on cost reduction: security teams will have the opportunity to reduce cost significantly by leveraging virtual AI analysts as part of their teams, pushing the skill sets and the team toward a higher degree of work rather than the lower degree of work. The layer one and layer two teams are going to get disrupted very soon. Layer three is where the change will get introduced.
Chris Kirschke: I think it presents something that has been talked about for a long time: how does information security get out of the SOC, out of the engineering mindset, and into the business? We haven't been able to do that because we're fighting fires, we're trying to keep up, we're doing more with less. Given the right implementation, it gives the next generation of InfoSec leaders the ability to rethink how programs are built. Deploy AI to be the most efficient, cost-effective security program out there, but spend more time with the business. Don't walk in and say, "Let's talk about firewall rules." Walk in and say, "Let's talk about enabling our digital transformation. You want to launch a mobile app? Let's go do that. I can show you how I'll support you in that effort so that we have a secure mobile app." I think that's the opportunity that really needs to be addressed, or I would say adopted.
If you want to catch up with Piyush Sharrma and Chris Kirschke, you can visit their booth at RSAC 2025, Booth #N-5371.