Category Archives: Technology

What I Learned Spending Weeks Researching Innovation Frameworks

I went down a rabbit hole trying to find the “right” innovation framework. Turns out most of what I believed about innovation was completely wrong.

I spent weeks studying MIT iTeams (for breakthrough tech exploration), Cascading Tree (for strategic alignment), GInI (for systematic enterprise innovation), and Scott Berkun’s The Myths of Innovation. Each framework works in different situations, but none is a silver bullet. And Bell Labs’ history taught me something crucial: pursuing an idea takes fourteen times as much effort as having it.

The biggest lesson? I was waiting for the perfect framework before starting, doing exactly what Berkun warns against. Looking for some system that would remove all uncertainty before I began.

Turns out innovation frameworks are useful tools when matched to the right situation and cultural context. But they can’t replace the courage to start imperfectly, the persistence to keep going, and the genuine curiosity to explore problems worth solving.

[Read the full post on Substack →]

The full post includes a free infographic with practical scenarios for each framework, diagnostic questions to assess organizational readiness, warning signs that a framework is failing, and actionable first steps you can try this week.


Making GARAK’s LLM Security Reports Actually Useful

Lately, I’ve been running security assessments on various LLM applications using NVIDIA’s GARAK tool. If you haven’t come across it yet, GARAK is a powerful open-source scanner that checks LLMs for all kinds of vulnerabilities: everything from prompt injection to jailbreaks and data leakage.

The tool itself is fantastic, but there was one thing driving me crazy: the reports.

The Problem with JSONL Reports

GARAK outputs all its test results as JSONL files (JSON Lines), which are basically long text files with one JSON object per line. Great for machines, terrible for humans trying to make sense of test results.
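
For context, a couple of lines from such a report look something like this (illustrative only; the field names here are simplified, not GARAK’s exact report schema):

{"probe": "promptinject", "status": "FAIL", "prompt": "...", "output": "..."}
{"probe": "dan.Dan_11_0", "status": "PASS", "prompt": "...", "output": "..."}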

I’d end up with these massive files full of valuable security data, but:

  • Couldn’t easily filter by vulnerability type
  • Had no way to sort or prioritize issues
  • Couldn’t quickly see patterns or success rates
  • Struggled to share the results with non-technical team members

Anyone who’s tried opening a raw JSONL file and making sense of it knows the pain I’m talking about.

The Solution: JSONL to Excel Converter

After wrestling with this problem, I finally decided to build a solution. I created a simple Python script that takes GARAK’s JSONL reports and transforms them into nicely organized Excel workbooks.

The tool:

  1. Takes any JSONL file (not just GARAK reports) and converts it to Excel
  2. Creates multiple sheets for different views of the data
  3. Adds proper formatting, column sizing, and filters
  4. Generates summary sheets showing test distributions and success rates
  5. Makes it easy to identify and prioritize security issues

Here’s what the output looks like for a typical GARAK report:

  • Summary sheet: Shows key fields like vulnerability type, status, and probe class
  • All Data sheet: Contains every single field from the original report
  • Status Analysis: Breaks down success/failure rates across all tests
  • Probe Success Rates: Shows which vulnerability types were most successful

Why This Matters

If you’re doing any kind of LLM security testing, quickly making sense of your test results is key. This simple conversion tool has saved me hours and helped me focus on real vulnerabilities instead of wrangling with report formatting.

The best part: the code is super simple, just a few lines of Python using pandas and xlsxwriter. I’ve put it up on GitHub for anyone to use.
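
For the curious, here’s a minimal sketch of the core conversion (illustrative, not the exact script from the repo; file names and sheet layout are placeholders):

import pandas as pd

# Load the JSONL report: one JSON object per line
df = pd.read_json("garak_report.jsonl", lines=True)

with pd.ExcelWriter("garak_report.xlsx", engine="xlsxwriter") as writer:
    # "All Data" sheet: every field from the original report
    df.to_excel(writer, sheet_name="All Data", index=False)

    # "Status Analysis" sheet: success/failure counts across all tests
    if "status" in df.columns:
        counts = df["status"].value_counts().rename_axis("status").reset_index(name="count")
        counts.to_excel(writer, sheet_name="Status Analysis", index=False)

    # Auto-size columns and add filters on the main sheet
    sheet = writer.sheets["All Data"]
    sheet.autofilter(0, 0, len(df), len(df.columns) - 1)
    for i, col in enumerate(df.columns):
        longest = df[col].astype(str).str.len().max() if len(df) else 0
        sheet.set_column(i, i, min(max(len(col), int(longest)) + 2, 60))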

Wrapping Up

Sometimes the simplest tools make the biggest difference. I built this converter to scratch my own itch, and it’s been surprisingly effective at saving time and effort.

If you’re doing LLM security testing with GARAK, I hope it helps make your workflow smoother too.

GARAK – JSONL to Excel Converter

Also, check out my second tool: GARAK Live Log Monitor with Highlights. It’s a bash script that lets you watch GARAK logs in real-time, automatically highlights key events, and saves a colorized log for later review or sharing.

Would love to hear your feedback!


Introducing SchemaWiseAI: The AI-Powered Solution for Seamless Database Query Mapping

[AI-generated image]

In today’s data-driven world, businesses and organizations generate vast amounts of data every day. Cybersecurity analysts, data engineers, and database administrators are increasingly turning to Large Language Models (LLMs) to help generate complex database queries. However, these LLM-generated queries often don’t align with an organization’s specific database schema, creating a major headache for data professionals.

This is where SchemaWiseAI comes in: a middleware tool, currently at the proof-of-concept stage, designed to bridge the gap between generic AI outputs and the specific needs of your data infrastructure. With SchemaWiseAI, you no longer need to manually adjust LLM-generated queries. The tool automatically transforms queries to match your exact data schema, saving time, reducing errors, and making data management easier.

What is SchemaWiseAI?

SchemaWiseAI is a middleware solution that adapts LLM-generated queries to match the unique database schemas of your organization. By ingesting your custom data structures, SchemaWiseAI ensures that every query is perfectly formatted and tailored to your needs, removing the need for manual adjustments. This powerful tool makes your data queries accurate, efficient, and easy to use, so you can focus on what matters most: getting insights from your data.

Why SchemaWiseAI?

LLMs can produce useful queries, but they often come with generic field names and structures that don’t fit your system. This mismatch requires tedious manual work to adapt each query to your specific data schema, causing unnecessary delays and increasing the chances of errors.

SchemaWiseAI solves this problem by automatically mapping field names and data structures to your custom schema. It makes sure that the queries generated by LLMs are accurate, efficient, and ready for execution in your environment, without the need for manual intervention.

Key Features of SchemaWiseAI

  1. Field Name Mapping: Automatically converts generic field names from LLM-generated queries into your custom names.
  2. Query Transformation: Transforms AI-generated queries to fit your exact data schema.
  3. Template-Based Query Generation: Quickly generates queries using predefined templates that match your system.

Example

The current proof-of-concept (POC) version of SchemaWiseAI includes a network proxy mapping feature. Below is a snippet of this mapping, which shows how the field names on the left are automatically mapped to new names on the right. For example, proxy log fields like "srcip", "dstip", and "status" are transformed into standardized names such as "src", "dst", and "http_status".

"proxy_logs": {
            "fields": {
                "srcip": {"map_to": "src", "type": "string"},
                "dstip": {"map_to": "dst", "type": "string"},
                "bytes": {"map_to": "bytes_total", "type": "string"},
                "status": {"map_to": "http_status", "type": "string"},
                "dhost": {"map_to": "dest_host", "type": "string"},
                "proto": {"map_to": "protocol", "type": "string"},
                "mtd": {"map_to": "method", "type": "string"},
                "url": {"map_to": "uri", "type": "string"}
            }

Output

The final outcome of this schema transformation appears as follows:

User Prompt Request: List all HTTP GET requests with status 404 from the last hour

Using template query: sourcetype="proxy" | where mtd="GET" AND status=404 | stats count as request_count by url, srcip | sort -request_count

Final Query: sourcetype="proxy" | where method="GET" AND http_status=404 | stats count as request_count by uri, src | sort -request_count
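
To make the idea concrete, here is a rough Python sketch of the field-name remapping step (illustrative only, not the actual SchemaWiseAI code):

import re

# Simplified version of the proxy_logs mapping shown above
FIELD_MAP = {
    "srcip": "src",
    "dstip": "dst",
    "status": "http_status",
    "mtd": "method",
    "url": "uri",
}

def remap_query(query: str) -> str:
    """Rewrite field names in a query to match the target schema."""
    for old, new in FIELD_MAP.items():
        # Whole-word replacement so "status" is not matched inside "http_status"
        query = re.sub(rf"\b{re.escape(old)}\b", new, query)
    return query

template = ('sourcetype="proxy" | where mtd="GET" AND status=404 '
            '| stats count as request_count by url, srcip | sort -request_count')
print(remap_query(template))
# sourcetype="proxy" | where method="GET" AND http_status=404
# | stats count as request_count by uri, src | sort -request_count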

For more transformation examples, check out the GitHub repo.

Why Choose Ollama for SchemaWiseAI?

At the core of the current SchemaWiseAI is Ollama (https://ollama.com/), a powerful, local AI platform that runs models directly on your machine, ensuring security, privacy, and speed. Here’s why Ollama is the ideal platform for SchemaWiseAI:

  1. Privacy and Security: Run AI models locally, ensuring that your sensitive data remains secure.
  2. Customizable AI: Tailor the LLM to your specific database needs with ease.
  3. Real-Time Performance: No cloud latency, providing fast, on-demand query generation.
  4. Cost-Effective: Avoid high cloud processing costs by running everything on your own infrastructure.
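
As a rough sketch of the integration, here is how a middleware like this can send a prompt to a local Ollama instance through its REST API (the model name and prompt are placeholders):

import json
import urllib.request

def ask_ollama(prompt: str, model: str = "llama3.2:1b") -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

print(ask_ollama("Write a proxy-log query listing all HTTP GET requests with status 404."))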

To get started with Ollama, see my earlier post, where I shared steps on how to install and configure Ollama on Kali Linux.

Who Can Benefit from SchemaWiseAI?

SchemaWiseAI is designed for professionals who work with data and rely on accurate, fast, and customized queries. Key users include:

  1. Cybersecurity Analysts: Quickly generate and refine queries for security logs and threat detection.
  2. Data Engineers: Automate the process of adapting AI queries to fit specific database structures.
  3. Database Administrators: Ensure that all queries are properly aligned with custom schemas, reducing errors and failures.
  4. Business Intelligence Analysts: Easily generate optimized queries for reporting, dashboards, and insights.

Current Limitations and Roadmap

  1. LLM Support: The POC currently targets Ollama; support for OpenAI and other popular platforms is planned.
  2. Data Schemas: Mapping currently covers proxy logs; support for a wider range of schemas, such as Palo Alto logs, DNS logs, and Windows logs, is on the roadmap.
  3. UX/UI: The user interface needs enhancements for a more intuitive experience.
  4. Query Optimization: More features are planned to optimize queries for different platforms and use cases.
  5. Scalability: Managing scale will likely require a machine learning approach, a pattern-based learning approach, or a hybrid of the two.

Getting Started with SchemaWiseAI

Ready to give SchemaWiseAI a try? The setup steps are listed on GitHub: https://github.com/azeemnow/Artificial-intelligence/tree/main/SchemaWiseAI

Conclusion: Transform Your Data Queries with SchemaWiseAI

SchemaWiseAI is the perfect solution for organizations looking to streamline their query generation process, improve query accuracy, and save time. Whether you’re a cybersecurity analyst, data engineer, or business intelligence analyst, SchemaWiseAI is designed to make working with data more efficient.

By automating the transformation of LLM-generated queries into organization-specific formats, SchemaWiseAI saves you the time and effort needed for manual adjustments. And with future features like broader LLM support, expanded schema integration, and improved user experience, SchemaWiseAI is positioned to become a game-changer in the world of data querying.

Disclosure:

Please note that some of the SchemaWiseAI code and content in this post were generated with the help of AI/Large Language Models (LLMs). The generated code and content have been carefully reviewed and adapted to ensure accuracy and relevance.



How to Install and Configure Ollama on Kali Linux


In the fast-growing world of artificial intelligence (AI), Ollama is becoming a popular tool for people who want to run powerful AI language models on their own computers. Instead of relying on cloud servers, Ollama lets you run AI models locally, meaning you have more privacy and control over your data. This guide will show you how to install and set up Ollama on Kali Linux so you can experiment with AI models right from your device.

What Is Ollama?

Ollama is a software framework that makes it easy to download, run, and manage large language models (LLMs) like LLaMA and other similar models on your computer. It’s designed for privacy and efficiency, so your data doesn’t leave your device. Ollama is getting more popular with developers and researchers who need to test AI models in a secure, private environment without sending data over the internet.

Why Use Ollama?

Ollama is gaining popularity for several reasons:

  • Privacy: Running models locally means your data stays on your device, which is crucial for people handling sensitive information.
  • Performance: Ollama is optimized to run on CPUs, so you don’t need a high-end graphics card (GPU) to use it.
  • Ease of Use: With simple commands, you can easily download and manage different AI models, making it accessible for beginners and advanced users alike.

Why Install Ollama on Kali Linux?

Kali Linux is a popular choice for cybersecurity professionals, ethical hackers, and digital forensics experts. It’s packed with tools for security testing, network analysis, and digital investigations. Adding Ollama to Kali Linux can be a big advantage for these users, letting them run advanced AI language models right on their own computer. This setup can help with tasks like analyzing threats, automating reports, and processing natural language data, such as logs and alerts.

By using Ollama on Kali Linux, professionals can:

  • Make Documentation Faster: AI models can help write reports, summaries, and other documents, saving time and improving consistency.
  • Automate Security Analysis: Combining Ollama with Kali’s security tools allows users to build scripts that look for trends, scan reports, and even identify potential threats.
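
As a simple illustration of that second point, here is a minimal Python sketch that feeds a log excerpt to a locally installed model via the ollama CLI (the model name and log path are placeholders):

import subprocess

# Grab the tail of a log file so the prompt stays small
with open("/var/log/auth.log") as log_file:
    excerpt = log_file.read()[-4000:]

# Ask the local model to flag anything suspicious; "ollama run" with a
# prompt argument runs non-interactively and prints the reply
result = subprocess.run(
    ["ollama", "run", "llama3.2:1b",
     f"Summarize any suspicious activity in this log:\n{excerpt}"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)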

Before You Begin

To get started with Ollama on Kali Linux, make sure you have:

  • Kali Linux version 2021.4 or later.
  • Enough RAM (at least 16GB is recommended for better performance).
  • sudo access on your system.

Note: Ollama was initially built for macOS, so the setup on Linux may have some limitations. Be sure to check Ollama’s GitHub page for the latest updates.

Steps to Install Ollama on Kali Linux

Step 1: Update Your System

First, update your system to make sure all packages are up to date. Open a terminal and type:

sudo apt update && sudo apt upgrade -y

Step 2: Install Ollama

The official Ollama installation for Ubuntu and other Debian-based systems (including Kali) is simple: run a curl command that downloads and executes the installation script:

curl -fsSL https://ollama.com/install.sh | sh

Step 3: Verify the Installation

ollama --version

You can also just enter ollama in the terminal; if it’s installed correctly, you should see the usage help output.

Installing and Running LLMs

The process for installing and running LLMs on Kali Linux is the same as on other Linux distributions:

To Install an LLM:

ollama pull <LLM_NAME>

In my case, I installed the llama3.2:1b model. You can browse the full library of available models on Ollama’s website.

Start Prompt

After you’ve completed the previous steps, you can start Ollama with the specific model that you installed and send your prompts:

ollama run <LLM_NAME>

Conclusion

Ollama provides a great way to run large language models on your own machine, keeping data secure and private. With this guide, you can install and configure Ollama on Kali Linux and explore AI without relying on cloud-based services. Whether you’re a developer, AI enthusiast, or just curious about AI models, Ollama lets you experiment with language models directly from your device.

Stay tuned to the Ollama GitHub page for the latest features and updates. Happy experimenting with Ollama on Kali Linux!

AI-Policy-Development-Guide

I recently published a comprehensive guide for organizations developing an AI policy. It includes key questions on AI governance, risk mitigation, compliance, and stakeholder engagement. You can find it on my GitHub: https://github.com/azeemnow/Artificial-intelligence/blob/main/AI-Policy-Development-Guide/AI-Policy-Development-Guide-v1.pdf

Disclosure: Some of the content in this blog post may have been generated or inspired by large language models (LLMs). Effort has been made to ensure accuracy and clarity.
