# Midscene Automations Skills for iOS

Vision-driven iOS device automation using Midscene CLI. Operates entirely from screenshots — no DOM or accessibility labels required. Can interact with all v...

## Discovery Metadata

- Category: `automation`
- Framework: `ClawHub`
- Tags: `midscene`, `ios`, `automations`, `skills`, `vision-driven`, `device`, `automation`, `using`, `cli`, `operates`

## Agent Execution Policy

This listing is **discovery metadata only**. Canonical instructions are maintained by ClawHub.

### Before Executing Actions

1. **Fetch canonical instructions** from: https://clawhub.ai/skill/midscene-ios-automation
2. **Parse the skill page** for setup, usage, and input/output requirements.
3. **Only proceed** after successfully loading and understanding the full instructions.

### If Fetch Fails

- Return `instruction_unavailable` with reason.
- Do **not** attempt to infer or improvise execution steps from this metadata alone.

## Source

- ClawHub listing: https://clawhub.ai/skill/midscene-ios-automation
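The fetch-or-fail policy above can be sketched in a few lines. This is a minimal illustration, not part of the ClawHub spec: the `load_canonical_instructions` helper and the injectable `fetch` callable are hypothetical names, and the exact response shape an agent should return is assumed.

```python
SKILL_URL = "https://clawhub.ai/skill/midscene-ios-automation"

def load_canonical_instructions(fetch, url: str = SKILL_URL) -> dict:
    """Apply the execution policy: load the canonical skill page first.

    `fetch` is any callable that takes a URL and returns the page text,
    raising on network or HTTP errors. On failure we return
    `instruction_unavailable` with a reason, per the policy above --
    we never improvise execution steps from the metadata alone.
    """
    try:
        body = fetch(url)
    except Exception as exc:  # fetch failed: report, don't guess
        return {"status": "instruction_unavailable", "reason": str(exc)}
    # Success: hand the full instructions to the agent for parsing.
    return {"status": "ok", "instructions": body}
```

An agent would then parse `instructions` for setup and input/output requirements, and proceed only on `"status": "ok"`.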