AI Humans: Alpaca Network + Modelz.io

Scene 1 (0s)

[Audio] ALPACA NETWORK | GATEWAYZ | AI HUMANS INC. POWERS THE AI ECONOMY. BUY INFERENCE. TRADE MODEL TOKENS. ACCELERATE AI RESEARCH. www.alpacanetwork.ai | [email protected] | www.modelz.io | www.gatewayz.io

Scene 2 (25s)

[Audio] VALUE CAPTURE IN AI IS BACKWARDS. Open-source AI creates the utility, while closed APIs capture the economic upside. Creators publish weights and apps drive usage, but neither owns the revenue stream that inference generates. There is no native, programmable way to route usage fees back to the people building and using the models, so the spread between value created and value captured keeps widening.

Scene 3 (55s)

[Audio] ChatGPT, Claude Sonnet 4, OpenRouter. AI model inference compute is doubling every 3.4 months, and 80% of total AI compute dollars are spent on inference, not training. Source: OpenAI, Roland Berger. https://www.rolandberger.com/en/Insights/Publications/Empowering-Telecoms-with-Gen-AI.html

Scene 4 (1m 18s)

[Audio] 2M+ OPEN-SOURCE MODELS ON HUGGING FACE, DOUBLING EVERY 6 MONTHS. More models means AI does more tasks and needs more inference. Source: Hugging Face Model Atlas, https://huggingface.co/spaces/Eliahu/Model-Atlas

Scene 5 (1m 32s)

[Audio] UNIFIED, TOKENIZED AI INFRASTRUCTURE. We believe the future of AI should be decentralized and user-owned. GATEWAYZ: one API to run any model. Turn any open-source model into a token with automatic on-chain revenue sharing (ERC-7641); inference calls automatically distribute fees to token holders, who co-own the model's inference revenues.
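
To make the "one API to run any model" idea concrete, here is a minimal sketch of a unified, OpenAI-style inference call routed through a single gateway. The endpoint URL, model identifier, and response shape are assumptions for illustration, not the documented Gatewayz API.

```python
# Illustrative only: the endpoint, model name, and payload shape are assumptions,
# not the documented Gatewayz API.
import os
import requests

GATEWAY_URL = "https://api.gatewayz.io/v1/chat/completions"  # hypothetical endpoint

def run_inference(model: str, prompt: str) -> str:
    """Send one chat-completion request through a single gateway, regardless of
    which open-source model or provider ultimately serves it."""
    resp = requests.post(
        GATEWAY_URL,
        headers={"Authorization": f"Bearer {os.environ['GATEWAYZ_API_KEY']}"},
        json={
            "model": model,  # e.g. any tokenized open-source model listed on Modelz
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(run_inference("llama-3.1-8b-instruct", "Summarize ERC-7641 in one sentence."))
```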

Scene 6 (1m 58s)

[Audio] MODELZ.IO MAKES IT POSSIBLE TO CO-OWN AI MODELS VIA SMART CONTRACT.
INITIAL MODEL OFFERING (ERC-20 backwards compatible) + MODEL OWNERSHIP (ERC-7641) = INFERENCE ASSET (ERC-7007).
Vetting & Due Diligence: models must have a Hugging Face page and a permissible open-source license.
AI Research Accelerator: fund and launch AI models, enabling AI researchers and developers to raise capital to continue building open-source models and agents.
Revenue Sharing: Modelz offers revenue sharing so token holders benefit from the model's long-term success.
https://ethereum-magicians.org/t/erc-7641-intrinsic-revshare-token/18999
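
The revenue-sharing mechanics live in the ERC-7641 contract itself; the sketch below is only a simplified, off-chain illustration of the pro-rata idea: a pool of accumulated inference fees is claimable by holders in proportion to their token balances. Holder addresses and amounts are hypothetical.

```python
# Simplified off-chain illustration of pro-rata revenue sharing.
# The real mechanism is defined by the ERC-7641 contract; this is not that code.

def claimable_shares(balances: dict[str, int], fee_pool_wei: int) -> dict[str, int]:
    """Split an accumulated inference-fee pool across holders, proportional to balance."""
    total_supply = sum(balances.values())
    return {
        holder: fee_pool_wei * bal // total_supply  # integer division, as on-chain math would use
        for holder, bal in balances.items()
    }

# Hypothetical holders of a model token and a 1 ETH fee pool (in wei).
balances = {"0xAlice": 600_000, "0xBob": 300_000, "0xCarol": 100_000}
print(claimable_shares(balances, 10**18))
# -> Alice can claim 60%, Bob 30%, Carol 10% of the pool.
```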

Scene 7 (2m 45s)

[Audio] GATEWAYZ IS CHEAP, SIMPLE & ON-CHAIN: INFERENCE WINS.
Centralized APIs (OpenAI, Anthropic) vs. decentralized networks (Bittensor, Gensyn) vs. Modelz.io & Gatewayz.io:
MODEL: closed, proprietary | open-source, but no revenue sharing | tokenized, with revenue sharing.
FEES: high, plus markup | variable (set by node operators) | 2–3% flat (subsidized by trading fees).
MULTI-PROVIDER ACCESS: ❌ locked to vendor infra | ❌ no unified routing | ✅ decentralized routing.
CREATOR INCENTIVES: ❌ no revenue from usage | ❌ no on-chain revenue | ✅ ongoing on-chain share payouts.
DEVELOPER LOCK-IN: API keys & vendor accounts | node-specific setup | wallet-based, portable.
FOR DEVELOPERS: one API, any model, no overhead or lock-in. FOR MODEL CREATORS: earn from every API call, keep open access. FOR ECOSYSTEM: marketplace dynamics, not another silo.

Scene 8 (3m 53s)

[Audio] GATEWAYZ INFERENCE SUBSIDIZED BY TRADING FEES. The flywheel: (1) trading volume in model tokens generates (2) trading fees (protocol revenue), which (3) subsidize model inference; (4) model inference generates protocol revenue, which funds (5) buy/burn of tokens (reduced supply), which (6) drives demand for the model token and further trading volume.
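
A toy simulation of that loop, with made-up parameters: trading volume generates protocol fees (step 2), part of which subsidizes the per-call inference price (step 3); inference revenue (step 4) funds buy/burns (step 5), which is assumed to feed back into token demand and trading volume (step 6). None of the rates or starting values reflect actual protocol parameters.

```python
# Toy model of the fee-subsidy flywheel. Every rate and starting value is an
# assumption for illustration; nothing here reflects actual protocol parameters.

trading_volume = 1_000_000.0   # monthly model-token trading volume, $
inference_calls = 2_000_000    # monthly inference calls
base_price = 0.01              # $ per call before subsidy

TRADING_FEE_RATE = 0.025       # 2.5% of volume -> protocol revenue (step 2)
SUBSIDY_SHARE = 0.50           # half of trading fees subsidize inference (step 3)
BURN_SHARE = 0.30              # share of inference revenue used to buy/burn (step 5)
DEMAND_FEEDBACK = 2.0          # $ of new trading volume per $ burned (step 6, assumed)

for month in range(1, 7):
    trading_fees = trading_volume * TRADING_FEE_RATE                # step 2
    subsidy_per_call = (trading_fees * SUBSIDY_SHARE) / inference_calls
    effective_price = max(base_price - subsidy_per_call, 0.0)       # step 3
    inference_revenue = inference_calls * effective_price           # step 4
    burned = inference_revenue * BURN_SHARE                         # step 5
    trading_volume += burned * DEMAND_FEEDBACK                      # step 6 closes the loop
    print(f"month {month}: price ${effective_price:.4f}/call, "
          f"fees ${trading_fees:,.0f}, burned ${burned:,.0f}")
```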

Scene 9 (4m 21s)

[Audio] TRACTION. Trading fees & protocol revenue: Paca trading fees $25K MRR; Modelz trading fees $15K MRR. Community raise: USD $1.2M in 2 weeks, no discounts ($0.0025 per token). 30K+ subscribers, 2K+ holders.

Scene 10 (4m 46s)

[Audio] $1,000,000 ARR by Jan 2026 (projected), at 19% month-over-month growth.
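
As a sanity check on that projection, $1,000,000 ARR corresponds to roughly $83.3K MRR. The sketch below compounds 19% month over month from the ~$40K combined MRR implied by the traction slide; the starting MRR and start date are assumptions, not reported figures.

```python
# Back-of-envelope check of the ARR projection. Starting MRR is an assumption
# read off the traction slide ($25K Paca + $15K Modelz), not a reported figure.
import math

current_mrr = 25_000 + 15_000      # assumed starting point
target_mrr = 1_000_000 / 12        # $1M ARR ~= $83.3K MRR
growth = 0.19                      # 19% month-over-month

months = math.ceil(math.log(target_mrr / current_mrr) / math.log(1 + growth))
print(f"~{months} months of 19% MoM growth to reach ${target_mrr:,.0f} MRR")
# With these assumptions: about 5 months, i.e. Jan 2026 from a mid/late-2025 start.
```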

Scene 11 (4m 55s)

[Audio] MILESTONE-BASED PLAN.
2025 H1, Token Launch + Modelz.io Beta: raised $1.2M+ with the launch ✅; building Modelz.io.
2025 H2, Modelz.io + Inference Gateway: launch and scale Modelz; ship the Inference Gateway; 10 paying pilots; $50–100K MRR exit rate ⚙. Budget band: ~$0.8M.
2026 H1, SDK + Integrations: TypeScript/Python SDK plus 25–50 app integrations; automated on-chain rev-share to 25+ models. MRR target: $150–250K; blended gross margin ≥ 35%. Budget band: ~$0.8M. Resourcing: +2 engineers, 1 solutions/BD.
2026 H2, Scale & Economics: 100 integrated apps; 100+ tokenized models with active inference; fee-subsidy loop running (trading fees → cheaper inference). MRR target: $300K+; payback < 12 months. Budget band: ~$0.9M.
SPEND UNLOCKS MILESTONES; MILESTONES UNLOCK THE NEXT RAISE.

Scene 12 (6m 24s)

[Audio] FOUNDING TEAM: JOAQUIM MIRO (CEO & Core Contributor), VAUGHN DIMARCO (CTO & Core Contributor), MICHAEL GORD (Core Contributor). Team: 7 FTEs, 5 contractors, and 6 part-time as of Q2 2025; 80+ years of combined AI and Web3 experience; 8 top advisors from the Web3 and AI industries.
Founder highlights: serial entrepreneur, investor, and partner in multiple $100M+ valuation businesses, with 1 exit; VC Partner at GDA Capital and Fintech Advisor at Holt Accelerator; advisor to Techstars, Founder Fuel, Founders Institute, and Outlier Ventures; co-founder of $SWAP ($400M market cap); Head of AI at a top engineering consulting firm in Canada, working in AI and data science as an ML consultant since 2012; co-founded an AI holding company and search fund acquiring AI startups and legacy software companies; serial entrepreneur with 1 exit; mechanical engineer; serial entrepreneur (3 exits, 4 acquisitions), investor and advisor (4 unicorns), financier ($70M+), and speaker (100+ conferences); advisor / board director to accelerators and organizations including the Chamber of Digital Commerce, Founders Institute, Brinc, and Techstars.

Scene 13 (8m 1s)

[Audio] RAISING $2.5M (SAFE), AI Humans Inc. (Canadian entity), with an optional token warrant. Targeting a 10% share of the inference market. Use of funds: Tech & Engineering $875,000 (35%); Token & Treasury $750,000 (30%); Go-To-Market $500,000 (20%); Operations $375,000 (15%).

Scene 14 (8m 35s)

[Audio] STRATEGIC COMPUTE PARTNERSHIPS POWERING THE DECENTRALIZED AI INFERENCE LAYER. Focus and strategic value: decentralized physical infrastructure (DePIN) providing high-performance, scalable compute for large-model inference; a distributed cloud compute marketplace providing flexible, on-demand compute capacity across multiple geos; a Web3 infrastructure and interoperability layer providing secure routing and integration for inference requests; a serverless AI compute platform providing specialized inference optimization for open-source LLMs.

Scene 15 (9m 11s)

[Audio] Projects & Partners.

Scene 16 (9m 17s)

[Audio] THANK YOU www.alpacanetwork.ai [email protected].