
January 30, 2024 | Joe Kim

Generative AI: The latest example of systems of insight


It’s safe to say that generative AI and the launch of GPT-4 caused the most excitement – and fear – in technology in 2023. That’s not surprising given its breadth of possible applications and its ease of use. Even schoolchildren are using it!

It’s been the main topic of conversation at major tech conferences, in industry publications, in company boardrooms, within product and engineering teams, and even at family dinner tables – but maybe that’s just my family’s dinner conversation. With so much interest and concern, no one should be surprised when, this time next year, every team across the business is asked what efficiency gains it has delivered.

So what does it mean to be prepared? At Sumo Logic, we’ve been asking ourselves these questions as a starting point to make sure we’re building and bringing value to customers in 2024 and the years ahead.

AI hallucinations: Are they a feature or a bug?

In a recent product meeting, one of my data scientists, reacting to what generative AI had produced, remarked that they didn’t even know our queries could work that way. Of course, we have to ask: what is it about generative AI that enables this level of creativity? I’d argue it’s because the technology can hallucinate and try things that are considered outside the norm. If we treated hallucinations purely as a bug to be fixed, we could accidentally kill the technology’s ability to bring creativity, insight, and even innovation to the table.

Most conversations about AI hallucinations stoke fear in the market, but I’d argue the true culprit isn’t the hallucination itself but the tone. The biggest issue with ChatGPT is that it presents its answers so definitively. Ideation and brainstorming, by contrast, require creativity and room to be wrong.

When we stop treating our AI brainstorming partner as an arbiter of truth and instead treat it as a partner in creativity, we can work toward solving incredibly hard new problems for customers using the natural strengths of this technology, hallucinations included.

Systems of record vs systems of insight

If we accept that Generative AI isn’t the source of a definitive answer for our customers, what additional technologies do we need to use? This is where systems of record become vital.

A system of record is the authoritative source of a piece of data, a fact, or a piece of information. It’s what powers typical IT systems today: logs, databases, service maps, integrations, tribal knowledge, and so on. The more data you can connect to it, the more powerful it becomes.

And because it is the authoritative information, the “source of truth,” it must be 100% correct. But it’s hard to manage and leverage in real time, which is why auto-generation and auto-discovery for systems of record are so critical.

While the phrase “single source of truth” has been around for a long time, for developers, security and operations teams, it’s often far more nebulous. That’s why logs have emerged as vital, serving as the atomic level of truth – the log is the only artifact that is naturally and automatically generated by applications and infrastructure, making it an obvious foundation for a system of record.
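
To make that concrete, here is a minimal Python sketch of what “naturally and automatically generated” means in practice: a hypothetical checkout service emits structured JSON log events as a side effect of handling requests, so the record of what happened stays current without anyone maintaining it by hand. The service name and fields are illustrative, not a prescribed schema.

```python
# A minimal sketch of logs as a system of record: the application emits
# structured JSON log events, and downstream tooling treats those events
# as the authoritative record of what actually happened.
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object (one fact per line)."""
    def format(self, record: logging.LogRecord) -> str:
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "service": "checkout",  # hypothetical service name
            "message": record.getMessage(),
        }
        return json.dumps(event)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("system_of_record")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Every request produces a log line automatically; no one has to remember
# to update a wiki or a diagram for the record to stay current.
logger.info("payment authorized for order_id=1234")
```

Each line is a small, timestamped fact, which is exactly the kind of atomic truth a system of record can be built on.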

Meanwhile, I think of generative AI as something akin to a “system of insight,” where the technology can give you insights you didn’t have before.

For example, in a previous role, I needed to give a 60-minute presentation about “innovation” to a room full of interns and associates. Without any idea of what to present, I did what any of us would do: I went to Google, typed “innovation” into the search bar, and scrolled through tons of websites, pictures, and videos. Ultimately, I discovered a picture of a turtle with F1 racing wheels that served as part of my presentation.

While I didn’t search for “turtle with F1 racing wheels”, it inspired my work. Just as Google Search inspired my thinking, generative AI can act as a system of insight, providing thought partnership that goes beyond search alone.

By combining systems of record and systems of insight, we will create new solutions that can give customers definitive answers on their own data.
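
As a rough illustration of what that combination could look like (a sketch, not Sumo Logic’s implementation), the snippet below pairs a log search with a generative model. Here, query_logs and ask_model are hypothetical stand-ins for a system of record and a system of insight; the shape is what matters: the model interprets evidence pulled from your own data instead of answering from memory.

```python
# A sketch of pairing a system of record with a system of insight:
# the log query supplies the facts, and the generative model reasons
# over those facts rather than inventing an answer.
from typing import List

def query_logs(query: str) -> List[str]:
    """Hypothetical stand-in for a log search against the system of record."""
    return [
        '{"level": "ERROR", "service": "checkout", "message": "timeout calling payments"}',
        '{"level": "ERROR", "service": "checkout", "message": "timeout calling payments"}',
    ]

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a generative model (the system of insight)."""
    return "Checkout errors cluster around payment-service timeouts; check that dependency first."

def grounded_answer(question: str) -> str:
    # 1. Pull the authoritative facts from the system of record.
    evidence = query_logs("error | where service = 'checkout'")
    # 2. Ask the system of insight to interpret them, constrained to that evidence.
    prompt = (
        f"Question: {question}\n"
        "Answer using only the log evidence below:\n" + "\n".join(evidence)
    )
    return ask_model(prompt)

print(grounded_answer("Why are checkout errors spiking?"))
```

Grounding the model’s answer in retrieved log evidence is what turns a creative but fallible system of insight into something that can give a definitive answer about your own data.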

What does this mean for developers, operations and security engineers?

It is both an exciting and dangerous time in technology and software. I think folks like Microsoft and GitHub have it right with the “copilot” naming convention: the name suggests the technology is helping you do your job better rather than trying to replace you.

I can foresee similar copilot innovations emerging in areas like testing, observability, and security as the critical connection between systems of record and insight is solidified. It could mean faster onboarding, especially when it’s easier to ask more natural and deeper questions of your systems, like “What can I do to improve the cost and security of my infrastructure?” It could also help automate repetitive tasks, particularly when it comes to reporting.

The dangers are also apparent as AI introduces potential security and legal issues. At a recent innovation roundtable (no F1 turtles this time) I attended, the top concern executives had was the legal ramifications of AI. Depending on how the large language models (LLMs) and foundation models (FMs) are trained, there are potential legal concerns, notably around training your models using other people’s IP. This raises an entire supply-chain question, including IP and intentional poisoning of information or code. That’s not to mention the “direct threats” when AI is used for social engineering or to write malicious code or malware.

To maximize upside and minimize risk, all leaders of developer, operations, and security engineering teams should be instituting basic best practices now.

  • Where in your code should you use AI, and where shouldn’t you? Will you be using generative AI to help write code in your core IP? The most important practice is to put a policy in place. Code can quickly proliferate through your product, and you’ll want visibility into how much of it was AI-assisted.
  • Where will you get the most benefit from AI? Pick the specific areas that would benefit most from combining systems of insight with systems of record, and train the LLMs on just that code base first.
  • Plan for your systems of record and systems of insight to align for future business success. While these processes and systems are in their nascent stages of interconnecting, unlocking their combined potential will be critical as your organization takes the next technological leap.

At Sumo Logic, we’re thrilled to embrace technologies like generative AI. More and more DevSecOps teams are realizing that logs can serve as the common language and system of record for team collaboration, and adding a system of insight on top will make the impact exponentially greater. We already have the industry’s most scalable log analytics platform, which is becoming the foundation for breaking silos and building collaboration.

It’s going to be an exciting 2024! 

Learn more about AI and log analytics.

Joe Kim

President & CEO

Joe Kim is the President & CEO of Sumo Logic, with over two decades of operating executive experience in the application, infrastructure, and security industries. He is passionate about helping customers address complex challenges through the delivery of powerful and efficient technologies and innovations.

Before joining Sumo Logic, Joe was a senior operating partner for Francisco Partners Consulting (FPC), assisting in deal thesis, assessing product-market-fit and technology readiness, and helping portfolio companies create value for customers and shareholders through advisory, board, and mentorship activities. Prior to FPC, Joe served as the chief technology and product officer at Citrix, where he was responsible for strategy, development, and delivery of the company’s $3.2B portfolio of products. Joe has held other senior executive roles at SolarWinds, Hewlett Packard Enterprise, and General Electric. Joe currently serves on the Board of Directors of SmartBear and Andela. Joe holds a B.S. in Computer Science, Criminology and Law studies from Marquette University. During his spare time, Joe enjoys spending time with his family.

