Claude + MCPs

MCP: The USB-C Port for AI Applications ..


MCP Filesystem

At its core, the Filesystem MCP Server acts as a bridge between Claude and your computer's file system. It's a reference implementation, typically running as a Node.js process, that allows Large Language Models (LLMs) like Claude to perform various file operations securely.

Key capabilities include:

  • Reading and writing files: Accessing and modifying the content of specific files.

  • Directory management: Creating, listing, and deleting folders.

  • File manipulation: Moving files and directories.

  • Searching: Finding files within specified directories.

  • Metadata retrieval: Getting information about files (like size, creation time, modification time, permissions).

Crucially, access is sandboxed. You explicitly define which directories Claude can interact with during setup, ensuring security and control.
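Under the hood, each of these capabilities is exposed as an MCP tool that the client invokes with a tools/call request. A directory listing might look roughly like this (the tool name follows the reference server's documentation, but treat the exact shape as illustrative):

{
  "method": "tools/call",
  "params": {
    "name": "list_directory",
    "arguments": { "path": "C:\\Temp" }
  }
}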

Why Use the Filesystem Server? Potential Use Cases

The ability for an AI to interact with local files opens up a world of possibilities. Here are just a few ideas:

  1. Desktop Organization: Is your desktop a chaotic mess of screenshots, downloads, and random files? Ask Claude to analyze the contents and automatically sort them into relevant folders (e.g., Images, Videos, Documents, Projects). It can even create nested folder structures based on dates or types.

  2. File Management Automation: Perform bulk operations like creating multiple project directories from a template, renaming files based on patterns, or searching for specific content within files across designated folders.

  3. Intelligent Workflow Integration:

    • Project Management: Have Claude read project files, summarize progress, or even update task lists based on file contents.

    • Document Processing: Automate tasks like extracting information from batches of text files or archiving documents based on specific criteria.

    • Backup & Archiving: Set up routines for backing up important directories.

  4. Development & Testing:

    • Code Management: Ask Claude to read your codebase, summarize specific components, or suggest refactoring improvements.

    • Documentation: Generate or update README.md files based on the project structure and code comments. Claude can analyze your project and create comprehensive documentation sections like installation steps, usage examples, and key technologies.

    • Testing Environments: Potentially use it to set up or manage files within specific testing directories.

Imagine asking Claude:

"Organize my Desktop folder. Create folders for Images, Videos, Documents, and Projects, and move the relevant files into them."

"Read the README.md in my next.js project and add sections for Prerequisites, Installation, and Available Scripts based on the package.json file."


  1. Open the Filesystem MCP server repository page: https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem

  2. Check that you have npx installed.

npx --version
10.9.2
  3. If it doesn't already exist, create C:\Users\[user_name]\AppData\Roaming\Claude\claude_desktop_config.json and open it in a code editor.

  4. Add the Filesystem server config.

You need to add an entry for the filesystem server within the mcpServers object.

{
    "version": "1.0.0",
    "settings": {
      "theme": "system",
      "language": "en",
      "notifications": {
        "enabled": true,
        "sound": true
      },
      "appearance": {
        "fontSize": 14,
        "lineHeight": 1.5,
        "maxWidth": 800
      },
      "behavior": {
        "autoUpdate": true,
        "startOnLogin": false,
        "minimizeToTray": true
      },
      "api": {
        "endpoint": "https://api.anthropic.com",
        "timeout": 30000
      }
    },
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": [
          "-y",
          "@modelcontextprotocol/server-filesystem",
          "C:\\"
        ]
      }
    }
}
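Note that "C:\\" grants Claude access to the entire C: drive. Since the server sandboxes access to exactly the directories you list in args, a tighter configuration names only the folders you want Claude to touch - for example (paths are illustrative):

"args": [
  "-y",
  "@modelcontextprotocol/server-filesystem",
  "C:\\Users\\[user_name]\\Desktop",
  "C:\\Projects"
]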
  5. Save the claude_desktop_config.json file.

  6. Quit the Claude Desktop application completely and relaunch it.

  7. Test: "How many files do I have in my C:\Temp directory?"

  8. Grant permission when prompted.

MCP Data Analysis

MCP Data Analysis is your personal Data Scientist assistant, turning complex datasets into clear, actionable insights.

Setup

This section is for reference only.

  1. Install uv on Windows.

powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
  2. Add C:\Users\[username]\.local\bin to your PATH (replace [username] with your own).

$env:Path = "C:\Users\[username]\.local\bin;$env:Path"
  3. Check Python is installed. If you don't have Python for Windows, download it from https://www.python.org/downloads/

python --version

MCP - Data Exploration

Build the MCP data-exploration server

  1. Clone the MCP--Data-Exploration GitHub repository.

git clone https://github.com/jporeilly/MCP--Data-Exploration.git
  2. Open the folder in Windsurf / Cursor.

  3. Set up a virtual environment.

python -m venv .venv
  4. Activate the virtual environment and install the dependencies with uv.

c:\MCP--Data-Exploration\.venv\Scripts\activate.ps1; uv pip sync pyproject.toml
  5. Build the MCP server.

uv build

Add the MCP server to the Cursor MCP configuration.

  1. In Cursor / Windsurf: File > Preferences > Cursor / Windsurf Settings.

  2. Copy and paste the following:

{
  "mcpServers": {
    "data-exploration": {
      "command": "uv",
      "args": [
        "--directory",
        "C:\\MCP--Data-Exploration\\src\\mcp_server_ds",
        "run",
        "mcp-server-ds"
      ]
    }
  }
}
  3. Restart the IDE.

The MCP server exposes 2 tools (a conceptual sketch follows):

  • load_csv - loads a CSV file into a DataFrame. If df_name is not provided, the tool automatically assigns sequential names: df_1, df_2, and so on.

  • run_script - executes a Python script that performs analytic tasks against the loaded DataFrames.
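The server's internals aren't reproduced here, but conceptually the two tools boil down to something like this (a minimal sketch assuming pandas under the hood; names and signatures are illustrative, not the server's actual code):

import pandas as pd

dataframes = {}   # shared store: df_name -> DataFrame
df_counter = 0

def load_csv(csv_path: str, df_name: str | None = None) -> str:
    """Load a CSV into the shared store, auto-naming df_1, df_2, ... if needed."""
    global df_counter
    if df_name is None:
        df_counter += 1
        df_name = f"df_{df_counter}"
    dataframes[df_name] = pd.read_csv(csv_path)
    return f"Loaded {csv_path} as {df_name}"

def run_script(script: str) -> str:
    """Execute an analytic Python script with the loaded DataFrames in scope."""
    namespace = {"pd": pd, **dataframes}
    exec(script, namespace)   # the real server also captures output for the LLM
    return str(namespace.get("result", "script executed"))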


Kaggle

The dataset from Kaggle covers student performance and behavior, containing 5,000 real student records. It includes key attributes such as attendance, assignment scores, quiz scores, study hours per week, stress level, and other factors for exploring patterns and insights relating to academic performance.

Looking at the dataset in spreadsheet form reveals each student's basic information, their quiz and assignment scores, how much they study, their family background, and similar information that will be analyzed.

  1. Download the Student Performance & Behavior dataset (Mahmoud Elhemaly) from Kaggle.

  2. Create an MCP--Data-Exploration/data folder.

  3. Extract the downloaded archive.zip into the /data directory.

  4. View Students_Grading_Dataset.csv.


Initial Analysis

Time to test ..

As you know, Cursor / Windsurf enable vibe coding, i.e. chatting with an LLM about your code - or, in this example, the dataset.

  1. Let's create a prompt that will load the data and conduct some initial analysis.

# Comprehensive Student Performance & Behavior Analysis Framework

## Dataset Information
File path: C:\MCP--Data-Exploration\data\Students_Grading_Dataset.csv
Dataset contains student academic metrics and demographic information to analyze performance patterns and influential factors.

## Analysis Objectives
1. Conduct thorough exploratory data analysis to understand variable distributions and relationships
2. Identify key factors that strongly correlate with academic performance
3. Analyze how behavioral factors (study hours, sleep, stress) impact academic outcomes
4. Examine demographic influences on student achievement
5. Determine which interventions might most effectively improve student outcomes

## Analysis Steps

### 1. Data Loading and Initial Inspection
- Import dataset and verify structure
- Review basic statistics and data types
- Identify missing values and potential data quality issues
- Generate summary statistics for numerical variables

### 2. Data Preprocessing
- Address missing values with appropriate methods
- Transform categorical variables as needed
- Create any helpful derived features
- Standardize numerical variables if needed

### 3. Exploratory Data Analysis
- Analyze distributions of all performance metrics
- Create correlation matrix to identify relationships between variables
- Compare grade distributions across departments and demographic groups
- Visualize relationships between behavioral factors and performance

### 4. Key Analysis Areas
- Investigate predictors of Total_Score and Grade
- Analyze impact of study habits on academic performance
- Examine stress-performance relationship
- Evaluate socioeconomic factors' influence (family income, parent education)
- Assess attendance patterns and their correlation with achievement

### 5. Statistical Analysis
- Perform significance testing between different student groups
- Conduct regression analysis to identify performance predictors
- Calculate effect sizes for meaningful relationships
- Consider clustering techniques to identify student archetypes

### 6. Insights and Recommendations
- Identify key success factors for high-performing students
- Highlight potential intervention areas for improved outcomes
- Suggest data-driven approaches for academic improvement
- Address limitations and potential biases in the analysis

## Special Considerations
- Ensure student privacy by properly anonymizing personal information
- Account for potential dataset biases
- Consider confounding variables before making causal claims
- Document limitations of both dataset and analysis methods
- Provide actionable recommendations backed by statistical evidence

Claude Desktop

An IDE like Cursor / Windsurf is great for development and testing.

The next step is to connect the MCP to a client environment - Claude Desktop - to conduct a more comprehensive analysis.

  1. Run setup_claude_windows.py.

python setup_claude_windows.py
  2. Restart Claude Desktop > click on the Tools icon.

  3. Enter the following prompt.

I'm a data scientist and would like to analyze student performance and behavior.

Dataset Information

File path: C:\MCP--Data-Exploration\data\Students_Grading_Dataset.csv 
Dataset contains student academic metrics and demographic information to analyze performance patterns and influential factors.

Analysis Objectives
Conduct thorough exploratory data analysis to understand variable distributions and relationships
Identify key factors that strongly correlate with academic performance
Analyze how behavioral factors (study hours, sleep, stress) impact academic outcomes
Examine demographic influences on student achievement
Determine which interventions might most effectively improve student outcomes
  4. Accept the various MCP server requests to locate, load, and analyze the dataset.

Claude will work through a number of tasks and generate various assets.

Streamlit Dashboard

Streamlit is a Python library that enables data scientists and developers to quickly build and share custom web applications without any front-end experience. It's specifically designed for machine learning and data science projects, allowing you to transform data scripts into shareable web apps in just a few minutes.

A Streamlit dashboard allows you to:

  • Filter data by department, gender, and grade.

  • Visualize grade distributions.

  • Explore means of numeric variables by grade.

  • See categorical breakdowns by grade.

  • View a correlation matrix for numeric variables.

  • Optionally view the raw filtered data.
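To get a feel for how little code such a dashboard takes, here is a minimal sketch (a simplified stand-in for student_dashboard.py, assuming the CSV path from the earlier steps and the Grade column described in the dataset):

import pandas as pd
import streamlit as st

# Load the dataset - path assumed from the earlier download steps
df = pd.read_csv(r"C:\MCP--Data-Exploration\data\Students_Grading_Dataset.csv")

st.title("Student Performance Dashboard")

# Sidebar filter on Grade (the real dashboard also filters department and gender)
grades = st.sidebar.multiselect("Grade", sorted(df["Grade"].unique()))
if grades:
    df = df[df["Grade"].isin(grades)]

st.subheader("Grade distribution")
st.bar_chart(df["Grade"].value_counts())

st.subheader("Correlation matrix (numeric variables)")
st.dataframe(df.select_dtypes("number").corr())

# Optionally view the raw filtered data
if st.checkbox("Show raw data"):
    st.dataframe(df)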

  1. Test Streamlit with the test_streamlit.py dashboard.

streamlit run test_streamlit.py --server.enableCORS=false --server.enableXsrfProtection=false --server.enableStaticServing=true --server.headless=true

The dashboard should run with:

  • CORS disabled

  • XSRF protection disabled

  • Static serving enabled

  • Headless mode enabled (for stateless operation)

You can now view your Streamlit app in your browser.

Local URL: http://localhost:8501
Network URL: http://192.168.1.2:8501
External URL: http://185.188.40.35:8501

  2. Run the Student Dashboard.

streamlit run student_dashboard.py --server.enableCORS=false --server.enableXsrfProtection=false --server.enableStaticServing=true --server.headless=true

MCP Research

Your personal Research Assistant, turning research questions into comprehensive, well-cited reports.

Setup

This section is for reference only.

  1. Install uv on Windows.

powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
  2. Add C:\Users\[username]\.local\bin to your PATH (replace [username] with your own).

$env:Path = "C:\Users\[username]\.local\bin;$env:Path"
  3. Check Python is installed.

python --version

MCP - Deep Research

The deep research MCP server provides powerful research capabilities:

  1. You can ask it to research any topic with configurable:

    • Depth (1-5): How many levels deep it goes in following research paths

    • Breadth (1-5): How many parallel research directions it explores at each level

  2. For each research query, it:

    • Generates targeted search queries

    • Evaluates source reliability

    • Extracts key learnings

    • Generates follow-up questions

    • Creates a final research report

  3. The server provides real-time progress updates as it:

    • Explores different research paths

    • Processes search results

    • Evaluates sources

    • Compiles findings
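Once connected to an MCP client, invoking the exposed 'deep-research' tool might look roughly like this (the argument names are assumptions based on the depth/breadth options above, not a confirmed schema):

{
  "method": "tools/call",
  "params": {
    "name": "deep-research",
    "arguments": {
      "query": "Impact of study habits on student performance",
      "depth": 3,
      "breadth": 2
    }
  }
}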

Build the deep-research MCP server:

  1. Clone the MCP--Deep-Research GitHub repository.

git clone https://github.com/jporeilly/MCP--Deep-Research.git

Open Deep Research is an experimental, fully open-source research assistant that automates deep research and produces comprehensive reports on any topic. It features two implementations - a workflow and a multi-agent architecture - each with distinct advantages. You can customize the entire research and writing process with specific models, prompts, report structure, and search tools.

How does it work?

Without going through every line of code .. at its heart lies a sophisticated automated research system that intelligently explores topics, evaluates sources, and synthesizes findings.

Directory Structure

Here are the main files and their roles:

Core Research Files:

  • src/deep-research.ts : Main research algorithm implementation

    • Contains: deepResearch, generateSerpQueries, processSerpResult, writeFinalReport

    • Handles: Research tree traversal, query generation, result processing

Server and Interface:

  • src/mcp-server.ts : MCP server implementation

    • Exposes: The 'deep-research' tool

    • Handles: Server setup, tool definition, progress notifications

Search and Processing:

  • src/search-adapter.ts : Search functionality interface

  • src/web-search.ts : Web search implementation

  • src/ai/text-splitter.ts : Text processing utilities

  • src/ai/providers.ts : AI model configurations and providers

Configuration and Setup:

  • src/config.ts : System configuration

  • src/prompt.ts : AI prompt templates

Progress and Output Management:

  • src/output-manager.ts : Handles output formatting and logging

  • src/progress-manager.ts : Manages research progress tracking

  • src/feedback.ts : Handles user feedback

Main Entry Point:

  • src/run.ts : Main application entry point

    • Orchestrates the research process

    • Handles metrics and progress tracking

AI Components:

  • src/ai/observability.ts : AI operation monitoring

  • src/ai/providers.ts : AI model configurations

  • src/ai/text-splitter.ts : Text processing utilities

  • src/ai/text-splitter.test.ts : Tests for text processing


  1. Query Generation and Research Direction

async function generateSerpQueries({
  query,
  numQueries = 3,
  learnings,
  learningReliabilities,
  researchDirections = []
})
  • Takes an initial query and generates multiple search queries

  • Uses previous learnings and their reliability scores to guide new queries

  • Prioritizes research directions based on:

    • High reliability findings (>= 0.7) for deeper exploration

    • Low reliability findings (< 0.7) for verification

    • Explicit research directions with priority scores

  2. Source Evaluation

type SourceMetadata = {
  url: string;
  title?: string;
  publishDate?: string;
  domain: string;
  relevanceScore?: number;
  reliabilityScore: number;
  reliabilityReasoning: string;
};
  • Evaluates each source's reliability

  • Tracks metadata including:

    • Domain credibility

    • Publication date

    • Relevance to query

    • Reasoning for reliability score

  3. Research Tree Traversal

export type ResearchProgress = {
  currentDepth: number;
  totalDepth: number;
  currentBreadth: number;
  totalBreadth: number;
  currentQuery?: string;
  parentQuery?: string;
  totalQueries: number;
  completedQueries: number;
  learningsCount?: number;
  learnings?: string[];
  followUpQuestions?: string[];
};

The algorithm traverses the research space as a tree where:

  • Depth: Represents how many levels of follow-up questions to explore

  • Breadth: How many parallel queries to investigate at each level

  • Each node tracks:

    • Parent query relationship

    • Current progress

    • Learnings at this node

    • Generated follow-up questions
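To get a feel for what these parameters cost: if breadth stayed constant at every level (the implementation may narrow it as it descends), the number of queries grows geometrically with depth. A quick sketch:

def total_queries(depth: int, breadth: int) -> int:
    """Upper bound on queries in the research tree, assuming constant breadth."""
    return sum(breadth ** level for level in range(1, depth + 1))

print(total_queries(3, 2))  # 2 + 4 + 8 = 14 queries
print(total_queries(5, 5))  # 3905 queries - why depth and breadth are capped at 5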

  4. Learning Aggregation

type LearningWithReliability = {
  content: string;
  reliability: number;
};
  • Combines findings from multiple sources

  • Weights learnings by source reliability

  • Deduplicates similar findings

  • Maintains provenance of information
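A minimal sketch of what reliability-weighted deduplication could look like (illustrative only - the actual TypeScript implementation lives in src/deep-research.ts):

def aggregate_learnings(learnings: list[dict]) -> list[dict]:
    """Deduplicate similar findings, keeping the highest-reliability version."""
    best: dict[str, dict] = {}
    for item in learnings:                     # item: {"content": str, "reliability": float}
        key = item["content"].strip().lower()  # naive similarity: normalized text
        if key not in best or item["reliability"] > best[key]["reliability"]:
            best[key] = item
    # highest-reliability findings first
    return sorted(best.values(), key=lambda x: -x["reliability"])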

  5. Concurrency and Rate Limiting

// Configurable concurrency limit
const ConcurrencyLimit = 2;
  • Uses concurrent processing for parallel research paths

  • Implements rate limiting for API calls

  • Handles timeouts (15 seconds per query)

  • Manages errors gracefully

  6. Final Report Generation

async function writeFinalReport({
  prompt,
  learnings,
  sourceMetadata,
})
  • Synthesizes all findings into a coherent report

  • Includes:

    • Original research query

    • Key learnings

    • Source citations

    • Reliability metrics

The algorithm is particularly sophisticated in how it:

  1. Adapts Research Direction: Uses reliability scores to decide whether to dig deeper or seek verification

  2. Maintains Context: Tracks the relationship between queries and findings

  3. Balances Resources: Uses configurable depth/breadth parameters to manage computational resources

  4. Ensures Quality: Evaluates and weights sources for reliability


MCP - MongoDB

The MongoDB MCP (Model Context Protocol) server connects large language models directly to MongoDB databases through natural language.

It functions as a bridge that enables AI assistants like Claude to query collections, explore database schemas, and analyze data without needing to write traditional MongoDB queries.

The server primarily provides read-only access for security purposes, though some implementations support write operations when configured with appropriate permissions.

Setup

  1. Check that you have npx installed.

npx --version
10.9.2
  2. Clone the MCP--MongoDB GitHub repository.

git clone https://github.com/jporeilly/MCP--MongoDB.git
  3. Deploy the MongoDB Docker container in Docker Desktop.

cd \
cd .\MCP--MongoDB
docker-compose up -d

You can check that the mcp-mongodb-1 container is up and running:

  • docker ps | findstr mongodb

  • Portainer: admin / Portainer123

  4. Test the connection to MongoDB.

python .\retail_connection.py
Successfully connected to MongoDB.
Server version: 8.0.9
Server uptime: 465 seconds
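retail_connection.py ships with the repo and isn't reproduced here, but a minimal equivalent check with pymongo might look like this (credentials taken from the connection string used later in this section):

from pymongo import MongoClient

uri = "mongodb://root:example@localhost:27017/?authSource=admin&directConnection=true"
client = MongoClient(uri, serverSelectionTimeoutMS=5000)

# Raises ServerSelectionTimeoutError if the container isn't reachable
client.admin.command("ping")
info = client.server_info()
print("Successfully connected to MongoDB.")
print(f"Server version: {info['version']}")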
  5. Seed with sample data and verify.

python .\populate_db.py
Connected to MongoDB
Connected to MongoDB
Inserted 50 users
Inserted 100 products
Inserted 1111 reviews
Inserted 1000 orders
Created indexes

Database summary:
Users: 50
Products: 100
Orders: 1000
Reviews: 1111
  6. In Windsurf: File > Preferences > Windsurf Settings.

  7. To add the MCP MongoDB server in Windsurf, click on Add Server.

  8. Select MongoDB.

  9. Enter the following connection string & Save.

mongodb://root:example@localhost:27017/?authSource=admin&directConnection=true
  10. Restart Windsurf.


Cursor

  1. Copy and paste the MCP server configuration (as with Windsurf above).

  2. Restart Cursor.

Initial Analysis

Here's a bunch of questions we can ask our MongoDB Analytics Database.

  • Check that the mongodb-1 container is up and running.

  • Check that Windsurf / Cursor IDE has connected to the MCP MongoDB server.

Database Overview:
"What databases exist in the MongoDB instance?"
"What collections are available in a specific database?"
"What's the storage size and statistics of a particular database?"

Collection Analysis:
"What's the schema of a specific collection?"
"How many documents are in a collection?"
"What indexes exist on a collection?"

Data Exploration:
"Find documents matching specific criteria"
"Run aggregation pipelines for data analysis"
"Get sample documents from a collection"

System Information:
"View recent MongoDB logs"
"Check startup warnings"

Here are some more complex analytical questions we could explore using the MongoDB MCP server's capabilities:

Data Analysis:
"Find all documents where field X is greater than average value across the collection"
"Get the top N most frequent values in a field, grouped by another field"
"Find documents with array fields containing specific patterns or combinations"

Performance Analysis:
"Analyze query performance using explain() on complex aggregation pipelines"
"Find collections with missing indexes based on query patterns"
"Identify the largest collections by storage size and document count"

Complex Aggregations:
"Calculate moving averages over time-series data"
"Perform multi-stage aggregations with lookups across collections"
"Create statistical summaries with standard deviation and percentiles"

Claude Desktop

Great, so we've connected to the sample 'analytics' database.

Let's now connect a client - Claude Desktop - for further in-depth analysis.


  1. Run the following setup script.

python setup_claude_windows.py


React Dashboard

  1. Check the mongodb-1 container is up and running.

docker ps
  2. In a new Terminal, start the Backend server.

cd \
cd MCP--MongoDB\acme-retail-dashboard\backend
node .\server.js
PS C:\MCP--MongoDB\acme-retail-dashboard\backend> node .\server.js
Server running in development mode on port 5000
API available at http://localhost:5000/api
  3. In another Terminal, start the Frontend.

cd \
cd MCP--MongoDB\acme-retail-dashboard\frontend
npm start

Compiled successfully!

You can now view frontend in the browser.

Local: http://localhost:3000
On Your Network: http://192.168.1.2:3000

Note that the development build is not optimized. To create a production build, use npm run build.
