This agent analyzes tokenized land NFTs to provide on-chain land intelligence using LLMs and geospatial data.
- ✅ Detect land overlaps using geospatial data from existing NFTs
- ✅ Fetch nearby amenities (e.g. schools, hospitals, roads) using OpenStreetMap's Overpass API
- ✅ Suggest land use-cases based on surrounding infrastructure (e.g. good for estate, farming, or commercial use)
- ✅ Generate human-readable reports using LLMs like Qwen2.5 via Ollama
- ✅ Suggest field visit itineraries based on relevant POIs around a land plot
This agent helps real estate investors and land buyers make informed decisions about land assets stored as NFTs.
- 🧱 ERC-1155 Land NFTs with metadata stored on IPFS
- 🛰️ OpenStreetMap API (Overpass) for nearby infrastructure
- 🧠 Qwen2.5 LLM served via Ollama
- ⚙️ Mastra Agent Framework
- 🔥 Firebase for storage and OpenSea link tracking
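To make the Overpass piece concrete, here is a minimal TypeScript sketch that builds an Overpass QL query for amenities within a radius of a point. The helper name and the 500m default are our own illustration; the agent's actual code in `src/mastra/agents/land-agent/` may differ.

```typescript
// Sketch: build an Overpass QL query for amenities near a point.
// buildOverpassQuery is a hypothetical helper, not the agent's actual API.
function buildOverpassQuery(lat: number, lon: number, radiusM = 500): string {
  // around:<radius>,<lat>,<lon> selects elements within radiusM metres
  return `[out:json][timeout:25];
(
  node["amenity"](around:${radiusM},${lat},${lon});
  way["highway"](around:${radiusM},${lat},${lon});
);
out center;`;
}

// The resulting query string would be POSTed to a public Overpass endpoint, e.g.:
// fetch("https://overpass-api.de/api/interpreter", { method: "POST", body: query })
```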
This guide shows you how to run the Land Intelligence AI Agent locally or in Docker, using an LLM served by Ollama.
- Node.js 20+
- pnpm
- Docker
- (Optional) Ollama installed locally
- Clone the repository:

```sh
git clone https://github.com/jnationj/agent-challenge.git
cd agent-challenge
```

- Start Ollama and pull the model:

```sh
ollama serve
ollama pull qwen2.5:1.5b
ollama run qwen2.5:1.5b
```

- Set up the environment. Create a `.env` file:

```sh
API_BASE_URL=http://localhost:11434/api
MODEL_NAME_AT_ENDPOINT=qwen2.5:1.5b
```

- Install dependencies and run:

```sh
pnpm install
pnpm run dev
```

The Mastra playground will be available at http://localhost:8080
Runs Ollama and the agent in the same container.

- Build the image:

```sh
docker build -t jnationj/agent-challenge:latest .
```

- Run the container:

```sh
docker run --env-file .env.docker -p 8080:8080 jnationj/agent-challenge:latest
```

The agent will automatically:
- Start Ollama
- Pull the Qwen model
- Start the agent on port 8080
- Create a `nosana_agent.json` with:

```json
{
  "ops": [
    {
      "id": "agent",
      "args": {
        "gpu": true,
        "image": "docker.io/jnationj/agent-challenge:latest",
        "entrypoint": ["/bin/sh"],
        "cmd": ["-c", "/start.sh"],
        "expose": [{ "port": 8080 }]
      },
      "type": "container/run"
    }
  ],
  "meta": {
    "trigger": "dashboard",
    "system_requirements": {
      "required_vram": 4
    }
  },
  "type": "container",
  "version": "0.1"
}
```

- Go to https://dashboard.nosana.io and submit the job.
To get started, run the following commands to start developing. We recommend pnpm, but you can use npm or bun if you prefer.

```sh
pnpm install
pnpm run dev
```

- Clone the Land AI Agent
- Install dependencies with `pnpm install`
- Run the development server with `pnpm run dev`
- Build your agent using the Mastra framework
Here we describe the steps needed to build the Land AI Agent.

This repo contains the Land Agent: a working agent for our submission that lets a user chat with an LLM and fetches real amenities data for the provided land coordinates (a real-world plot of land).

There are main folders we need to pay attention to:

In `src/mastra/agents/land-agent/` you will find the complete submission of a working agent, with the agent definition, API calls, and interface definitions: basically everything needed to get a fully fledged agent up and running.

We also provide the `src/mastra/agents/land-agent/land-workflow.ts` file. It shows how to chain agents and tools into a workflow: the user provides a 4-coordinate polygon, and the agent returns a boolean indicating whether the plot overlaps an existing NFT, lists basic amenities and land-use suggestions for the location, and proposes a field-visit itinerary (for example, if you want to check whether there are schools around the property).
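To illustrate the overlap step of the workflow, here is a simplified TypeScript sketch: before any exact geometry, a cheap bounding-box test can rule out most plots. This is an illustrative first pass only (a full overlap test would also need edge-intersection or point-in-polygon checks), and the function names are hypothetical, not the agent's actual implementation.

```typescript
type Point = [number, number]; // [lat, lon]

// Axis-aligned bounding box of a polygon.
function bbox(poly: Point[]) {
  const lats = poly.map((p) => p[0]);
  const lons = poly.map((p) => p[1]);
  return {
    minLat: Math.min(...lats), maxLat: Math.max(...lats),
    minLon: Math.min(...lons), maxLon: Math.max(...lons),
  };
}

// Cheap first-pass overlap test: do the two bounding boxes intersect?
// If this returns false, the plots cannot overlap; if true, an exact
// geometric check is still required.
function bboxOverlap(a: Point[], b: Point[]): boolean {
  const A = bbox(a);
  const B = bbox(b);
  return A.minLat <= B.maxLat && B.minLat <= A.maxLat &&
         A.minLon <= B.maxLon && B.minLon <= A.maxLon;
}
```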
Agents depend on an LLM to be able to do their work.
You can use the following endpoint and model for testing, if you wish:

```sh
MODEL_NAME_AT_ENDPOINT=qwen2.5:1.5b
API_BASE_URL=http://127.0.0.1:11434/api
```
The default configuration uses a local Ollama LLM.
For local development, or if you prefer to use your own LLM, you can use Ollama to serve the lightweight qwen2.5:1.5b model.
Installation & Setup:

- Start the Ollama service:

```sh
ollama serve
```

- Pull and run the `qwen2.5:1.5b` model:

```sh
ollama pull qwen2.5:1.5b
ollama run qwen2.5:1.5b
```

- Update your `.env` file
There are two predefined environments in the `.env` file: one for local development, and another, with a larger model (qwen2.5:32b), for more complex use cases.
Why qwen2.5:1.5b?
- Lightweight (only ~1GB)
- Fast inference on CPU
- Supports tool calling
- Great for development and testing
Do note that qwen2.5:1.5b is not suited for complex tasks.
The Ollama server will run on http://localhost:11434 by default and is compatible with the OpenAI API format that Mastra expects.
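As a concrete illustration of that wiring, here is a hedged TypeScript sketch of the kind of chat request that ends up going to Ollama's `/api/chat` endpoint. Mastra builds and sends this internally, and the helper name here is our own, so treat it as illustrative only.

```typescript
// Sketch: construct (but don't send) a chat request for Ollama's /api/chat.
// In the real stack, Mastra handles this for you; buildChatRequest is hypothetical.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(baseUrl: string, model: string, messages: ChatMessage[]) {
  return {
    url: `${baseUrl}/chat`, // e.g. http://127.0.0.1:11434/api/chat
    body: { model, messages, stream: false }, // stream: false → single JSON response
  };
}

// Usage: fetch(req.url, { method: "POST", body: JSON.stringify(req.body) })
```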
| Variable | Description | Example |
|---|---|---|
| `API_BASE_URL` | URL for the Ollama API | `http://127.0.0.1:11434/api` (inside container) |
| `MODEL_NAME_AT_ENDPOINT` | The LLM model to load via Ollama | `qwen2.5:1.5b` |
| `PORT` (optional) | Port your agent will run on (defaults to 8080) | `8080` |

Example `.env.docker`:

```sh
API_BASE_URL=http://127.0.0.1:11434/api
MODEL_NAME_AT_ENDPOINT=qwen2.5:1.5b
PORT=8080
```

You can pass this file to Docker like:

```sh
docker run --env-file .env.docker -p 8080:8080 jnationj/agent-challenge:latest
```

- `API_BASE_URL`: needed so the agent knows where to call Ollama for LLM responses
- `MODEL_NAME_AT_ENDPOINT`: used in `ollama pull` to fetch the correct model
- `PORT`: tells Mastra or your server which port to bind to (if your app supports it)
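The three variables above can be read with simple fallbacks. A minimal sketch follows; the defaults mirror the values documented in this README, and `loadConfig` is a hypothetical helper rather than part of the actual agent code.

```typescript
// Sketch: read agent configuration from the environment,
// falling back to the defaults documented in this README.
function loadConfig(env: Record<string, string | undefined>) {
  return {
    apiBaseUrl: env.API_BASE_URL ?? "http://127.0.0.1:11434/api",
    modelName: env.MODEL_NAME_AT_ENDPOINT ?? "qwen2.5:1.5b",
    port: Number(env.PORT ?? "8080"),
  };
}

// Usage: const cfg = loadConfig(process.env);
```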
You can read the Mastra Documentation: Playground to learn more about how to test your agent locally. Before deploying your agent to Nosana, it's crucial to test it thoroughly to ensure everything works as expected. Follow these steps to validate the land agent:
Local Testing:
- Start the development server with `pnpm run dev` and navigate to http://localhost:8080 in your browser
- Test your agent's conversation flow by interacting with it through the chat interface
- Verify tool functionality by triggering scenarios that call your custom tools
- Check error handling by providing invalid inputs or testing edge cases
- Monitor the console logs to ensure there are no runtime errors or warnings
Docker Testing: After building your Docker container, test it locally before pushing to the registry:

```sh
# Build your container
docker build -t jnationj/agent-challenge:latest .

# Run it locally with environment variables
docker run -p 8080:8080 --env-file .env.docker jnationj/agent-challenge:latest

# Test the containerized agent at http://localhost:8080
```

Ensure your agent responds correctly and all tools function properly within the containerized environment. This step is critical, as the Nosana deployment will use this exact container.
- Container URL for our land-agent submission: https://hub.docker.com/r/jnationj/agent-challenge
- Deploy land-agent Docker container on Nosana
- Land-agent successfully ran on the Nosana network
- Include the Nosana job ID or deployment link
- Dashboard
- Deployment Link
We have included a Nosana job definition at `./nos_job_def/nosana_agent.json`, which you can use to publish the land agent to the Nosana network.
A. Deploying using @nosana/cli
- Publish your Docker image and reference it in the `image` property: `"image": "docker.io/jnationj/agent-challenge:latest"`
- Download and install the @nosana/cli
- Load your wallet with some funds:
  - Retrieve your address with `nosana address`
  - Go to our Discord and ask for some NOS and SOL to publish your job
- Run:

```sh
nosana job post --file nosana_agent.json --market nvidia-4090 --timeout 30
```

- Go to the Nosana Dashboard to see your job
B. Deploying using the Nosana Dashboard
- Make sure you have https://phantom.com/ installed in your browser.
- Go to our Discord and ask for some NOS and SOL to publish your job, OR fund the wallet yourself by buying NOS on an exchange such as Gate or MEXC.
- Click the `Expand` button on the Nosana Dashboard.
- Copy and paste the `./nos_job_def/nosana_agent.json` Nosana Job Definition file into the textarea.
- Choose an appropriate GPU for the AI model that you are using.
- Click `Deploy`.
🧪 Example Usage

Once the agent is running (locally, in Docker, or on Nosana), you can interact with it using the Mastra Agent Playground or via API.

✅ 1. Open the Playground

Visit the running agent in your browser: http://localhost:8080

You'll see the Mastra AI Playground where you can interact with the Land Intelligence Agent using plain language.

✅ 2. Example Prompts

Try asking:
Analyze this land:

```json
[
  [52.3667, 4.8945],
  [52.3668, 4.8955],
  [52.3658, 4.8956],
  [52.3657, 4.8946]
]
```
Is this land already registered?

```json
[
  [4.81790, 7.00644],
  [4.81780, 7.00622],
  [4.81765, 7.00636],
  [4.81775, 7.00658]
]
```
The agent will:
- ✅ Check for overlap with existing land NFTs on-chain
- ✅ Fetch nearby amenities (e.g. hospitals, schools, roads) within 500m
- ✅ Suggest possible land use-cases (e.g. estate, farm, market)
Short video demo (4 min): https://youtu.be/epZ8IRD0J3Y
