Enabling Compliance and Security in AI-Driven, Low-Code/No-Code Development

Low-code/no-code development offers a lot of opportunities for companies across sectors, but it can also bring new security risks and compliance concerns.

AI is rapidly changing the way people build their own apps, automation, and copilots, helping enterprises improve efficiency and output without further straining IT and the help desk. While this levels the playing field for software development, it also brings increased cybersecurity risk.

For security leaders, it’s important to understand this new wave of business application and AI development and the risks that come with it, and to have a game plan for addressing those risks. The good news is that you don’t have to choose between AI-driven development and security and compliance.

The Rise of AI in No-Code/Low-Code

AI, low-code, and no-code platforms have become the default way to democratize development; they give any business user developer-level power, whether or not they can write code. Driven by a combination of limited internal resources, time constraints, and the need to constantly innovate, organizations are increasingly depending on these technologies to make business users more efficient and productive.

Gartner has predicted that low-code/no-code development will be responsible for over 70% of all new applications by 2025. The analyst firm also predicts that by 2026, over 80% of organizations will have used GenAI application programming interfaces (APIs), models, or GenAI-enabled apps in production environments. This represents a massive shift, as less than 5% did so last year. So-called “citizen developers” are creating data flows, automation, apps, and more simply by asking a copilot, a GenAI conversational interface, to build it. They can now even build their own copilots and share them publicly through the development platforms’ stores.

Productivity Increases, but Security Is Left Playing Catch-up

There are two main risks at play here. First, it is no longer dozens or even hundreds of apps being introduced into production environments; tens or even hundreds of thousands of new applications, connections, and automations are being created by users of all technical backgrounds, which expands the threat landscape. Primary threats include data leakage and account impersonation. Second, the platforms ship with many well-intentioned default settings designed to make it easy for anyone to build their own apps. Those same defaults, however, also make it very easy to introduce mistakes during development, which can keep security professionals up at night.

Typically, when companies have security and compliance programs, those programs target the work being done by traditional professional developers. But today, with AI, everyone can be a developer. People are now creating apps and automation outside the purview of IT, building what they need on their own without the technical knowledge that used to be required, and that’s a big change.

These activities within the business lines aren’t always monitored or tracked, and that creates a security problem: visibility is one of the most fundamental requirements for truly securing everything within a company. You can’t protect what you can’t see.

This is a problem for almost any company, but especially for those in heavily regulated industries like finance and healthcare that are subject to rigorous regulations and compliance standards. As more people create apps and use AI, more systems require access to sensitive data. Without a thorough inspection of who is creating what, and of which apps access truly sensitive data, these organizations leave themselves open to new fines and increased regulatory scrutiny.

Taking Back Control Without Sacrificing Productivity

It might be tempting to prohibit employees and third-party (guest) users from using these tools altogether to avoid the security challenges, but that’s not realistic. People will find ways to access the tools they need; a ban is unlikely to hold, and it could stunt innovation, hamper efficiency, and slow productivity. Increasingly, security leaders need to show that they are part of the business enablement strategy rather than gatekeepers.

Instead, the question is how to make the use of these tools safer. As with most security work, gaining visibility is the first step. Your security team needs to know which tools are being used and which applications are being developed, while gaining a deep understanding of the business impact each application has in the enterprise.

Getting this visibility – and ensuring the security team remains looped in and able to process and act on those insights – requires these elements (a sketch of how some of them might be automated follows the list):

  • Identify every instance where an app contains AI and/or where AI was used to help build a resource. In addition, develop a knowledge baseline regarding the business context of every one of those resources. This includes who the users are, why they use the resource, what data it interacts with, and so on.
  • Ensure that automation and apps that need to access sensitive data have the right data sensitivity tags, as well as the right authentication protocols, identity, anomaly detection, and access tools.
  • Evaluate each resource for threats to help security teams know how to prioritize violations, alerts, and more.
  • Make sure that every app is shared only with the appropriate people. Many modern development platforms use default permissions that let anyone in the tenant or directory gain access to and use these apps.
  • Prioritize security; set up rules and connect with both professional and citizen developers to ensure they meet the organization’s standards as they develop with GenAI.
  • Implement ongoing vulnerability scanning to detect misconfigured and/or insecure apps as they are being built.
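To make a few of these elements concrete, here is a minimal sketch of what an automated inventory check might look like. It assumes the app inventory has already been exported to a JSON file, and every field and connector name in it (sharing_scope, sensitivity_label, connectors, built_with_ai, business_owner) is a hypothetical schema invented for illustration, not any platform’s actual API; in practice, the same data would come from your low-code platform’s admin or governance interfaces.

```python
import json
from typing import Any

# Connectors we treat as touching sensitive data (hypothetical names for illustration).
SENSITIVE_CONNECTORS = {"sql", "sharepoint", "salesforce", "hr_system"}


def evaluate_app(app: dict[str, Any]) -> list[str]:
    """Return a list of policy findings for a single app record."""
    findings = []

    # Checklist item: apps should not be shared with the entire tenant by default.
    if app.get("sharing_scope") == "everyone_in_tenant":
        findings.append("shared with the entire tenant; restrict to a named group")

    # Checklist item: apps touching sensitive connectors need a sensitivity label.
    touches_sensitive = SENSITIVE_CONNECTORS & set(app.get("connectors", []))
    if touches_sensitive and not app.get("sensitivity_label"):
        findings.append(
            f"uses sensitive connectors {sorted(touches_sensitive)} but has no sensitivity label"
        )

    # Checklist item: AI-built resources need recorded business context (an owner, at minimum).
    if app.get("built_with_ai") and not app.get("business_owner"):
        findings.append("built with AI assistance but has no recorded business owner")

    return findings


def scan_inventory(path: str) -> dict[str, list[str]]:
    """Scan an exported app inventory (a JSON list of app records) and return findings per app."""
    with open(path, encoding="utf-8") as f:
        inventory = json.load(f)
    return {
        app.get("name", "<unnamed app>"): issues
        for app in inventory
        if (issues := evaluate_app(app))
    }


if __name__ == "__main__":
    # Example: run against a nightly export so misconfigurations surface as apps are built.
    for app_name, issues in scan_inventory("app_inventory.json").items():
        for issue in issues:
            print(f"[{app_name}] {issue}")
```

A real deployment would run checks like these on a schedule against live platform data (the ongoing scanning item above) and route findings to the owners of each app rather than printing them.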

A Secure Development Strategy

Citizen development offers a lot of opportunities for companies across sectors, but it can also bring new security risks and compliance concerns, especially now that AI is so readily available and widely used. As these technologies become the norm, organizations need to know who is developing what so they can maintain security and compliance. Visibility is key, so use the list of required elements above to ensure not only security and compliance but also productivity and efficiency.

We provide consulting, implementation, and management services on DevOps, DevSecOps, DataOps, Cloud, Automated Ops, Microservices, Infrastructure, and Security.

 

Services offered by us: https://www.zippyops.com/services

Our Products: https://www.zippyops.com/products

Our Solutions: https://www.zippyops.com/solutions

For Demo, videos check out YouTube Playlist: https://www.youtube.com/watch?v=4FYvPooN_Tg&list=PLCJ3JpanNyCfXlHahZhYgJH9-rV6ouPro

 

If this seems interesting, please email us at [email protected] for a call.


