Read Meter or Dial
Automatically read the numerical values of a meter or dial.

eyepop.describe.read-dial:latest
Prompt
Analyze the provided image of a meter or dial and extract the current numerical reading.
INSTRUCTIONS:
1. LOCATE THE INDICATOR TIP: Find the long, thin, pointed end of the indicator needle.
2. LOCATE THE...
...Run the full prompt in your EyePop.ai dashboard
Input
Image
Output
Text
Image size
1000x1000
Model type
EyePop.ai VLM
How It Works
As its name implies, the Describe Image task on the Abilities tab does exactly that: it generates a detailed text description of any input image. It is highly versatile because you can use it as a foundational building block alongside other abilities, or on its own to understand a scene without needing to view the image directly.
Here we use the Describe Image task to automatically read the numerical value of a meter or dial. By automating this visual inspection, businesses can digitize their instruments, standardize how they collect data, and reduce human error on the factory floor.
Given an image of a meter or dial, the ability returns its numerical reading. For example, running it on the sample image above returns: 4.

SDK Tutorial
First, let’s define the ability:
from eyepop import EyePopSdk
from eyepop.data.data_types import InferRuntimeConfig, VlmAbilityGroupCreate, VlmAbilityCreate, TransformInto
from eyepop.worker.worker_types import InferenceComponent, Pop
import json

# NAMESPACE_PREFIX and dial_prompt must be defined beforehand (see below).
ability_prototypes = [
    VlmAbilityCreate(
        name=f"{NAMESPACE_PREFIX}.describe.read-dial",
        description="Read a dial/meter",
        worker_release="qwen3-instruct",
        text_prompt=dial_prompt,
        transform_into=TransformInto(),
        config=InferRuntimeConfig(
            max_new_tokens=150,  # cap the length of the generated reading
            image_size=1000      # resize input images to 1000x1000
        ),
        is_public=False
    )
]
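The snippet above assumes `NAMESPACE_PREFIX` and `dial_prompt` are already defined. A minimal sketch of that setup follows; the prompt text here is only the truncated excerpt shown earlier, so copy the full version from your EyePop.ai dashboard, and replace the namespace with your own if it differs:

```python
# Assumption: "eyepop" matches the ability name shown at the top of this page.
NAMESPACE_PREFIX = "eyepop"

# Truncated excerpt of the prompt; use the complete prompt from the dashboard.
dial_prompt = (
    "Analyze the provided image of a meter or dial and extract the current "
    "numerical reading.\n"
    "INSTRUCTIONS:\n"
    "1. LOCATE THE INDICATOR TIP: Find the long, thin, pointed end of the "
    "indicator needle.\n"
    # ... remaining instructions omitted here
)

full_ability_name = f"{NAMESPACE_PREFIX}.describe.read-dial"
```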
The prompt we can use here is:
"Analyze the provided image of a meter or dial and extract the current numerical reading.
INSTRUCTIONS:
1. LOCATE THE INDICATOR TIP: Find the long, thin, pointed end of the indicator needle.
2. LOCATE THE..."
Next, we create the ability with the following code:
with EyePopSdk.dataEndpoint(api_key=EYEPOP_API_KEY, account_id=EYEPOP_ACCOUNT_ID) as endpoint:
    for ability_prototype in ability_prototypes:
        # Create a group to hold this ability's versions
        ability_group = endpoint.create_vlm_ability_group(VlmAbilityGroupCreate(
            name=ability_prototype.name,
            description=ability_prototype.description,
            default_alias_name=ability_prototype.name,
        ))
        # Create the ability inside the group
        ability = endpoint.create_vlm_ability(
            create=ability_prototype,
            vlm_ability_group_uuid=ability_group.uuid,
        )
        # Publish it under its alias name
        ability = endpoint.publish_vlm_ability(
            vlm_ability_uuid=ability.uuid,
            alias_name=ability_prototype.name,
        )
        # Tag the published version as "latest"
        ability = endpoint.add_vlm_ability_alias(
            vlm_ability_uuid=ability.uuid,
            alias_name=ability_prototype.name,
            tag_name="latest"
        )
        print(f"created ability {ability.uuid} with alias entries {ability.alias_entries}")
That’s it! To run the prompt against an image, here is some sample evaluation code:
from pathlib import Path

# Reference the published ability by its "latest" tag
pop = Pop(components=[
    InferenceComponent(
        ability=f"{NAMESPACE_PREFIX}.describe.read-dial:latest"
    )
])

with EyePopSdk.workerEndpoint(api_key=EYEPOP_API_KEY) as endpoint:
    endpoint.set_pop(pop)
    sample_img_path = Path("/content/sample_img.png")
    job = endpoint.upload(sample_img_path)
    # Drain prediction results until the job is exhausted
    while result := job.predict():
        print(json.dumps(result, indent=2))
    print("Done")
After running the evaluation, compare the model's reading against your source of truth. Any discrepancies point to where you can refine your prompt and thereby improve accuracy.
Get early access
Want to move faster with visual automation? Request early access to Abilities and get notified as new vision capabilities roll out.