How HeadlessProfile data is wired together.
This page is the public source-of-truth map for the directory. It explains what hns.bio is, how our headlessprofile fork extends it, where the directory's data actually comes from, and which fields look like DNS TXT records but are not.
/01 Two things named "hns.bio"
It helps to keep these separate, because conflating them leads to wrong conclusions.
| Term | What it is | Where it lives |
|---|---|---|
| hns.bio (the standard / website) | Open spec for representing identity on Handshake TLDs via DNS TXT records (`pfp:`, `link:`, `x:`, `btc:`, …). Also a live website that resolves Handshake domains and renders profiles. | hns.bio · H4ckB4s3/hns-bio |
| `Domain.hns_bio` (the column) | A JSON column on the `Domain` table inside headlessdomains.com. Holds the profile data, which is then serialized into TXT records that follow the hns.bio standard plus our fork's extensions. | headlessdomains-com/models.py |
Whenever this page says hns.bio, it means the standard / website. Whenever it says `hns_bio`, it means the column.
/02 The 3-bucket TXT prefix map
Every prefix the directory parses falls into one of three buckets. The first two are real DNS TXT records. The third is not.
Bucket A — Inherited from the hns.bio standard
Same prefix, same meaning, same parser as upstream H4ckB4s3/hns-bio. We did not change them.
- Layout / branding
- Communication
- Web
- Social & media
- Wallets (56 addresses)
Bucket B — Added by the headlessprofile fork
The fork's fetch.js parses all of the prefixes below, but they fall into three sub-categories based on whether anything actually writes them to DNS. Several are also drafted in a proposal back to upstream hns.bio.
B1 — Published by `sync_bio_to_dns` (in `known_keys`)
- `name:` Display name. Proposed upstream.
- `category:` Profile classification. Proposed upstream.
- `bio:` Short description. Proposed upstream.
- `custom:` Custom field; rendered as a free-form line on the profile.
- `agent-manifest:` JSON-file link; URL to `agent.json`. Also written by a second writer (`inject_manifest_dns_records` in `utils/skyinclude.py:307`) — both publish the same key.
- `skill-md:` MD-file link; URL to `SKILL.md`. Same dual-writer situation as `agent-manifest:`.
- `arp:` Agent Routing Protocol pairing endpoint at the apex. Also written by `routes/arp.py` directly (which deletes-then-adds, so it dedupes itself), so this key has two writers.
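The allow-list behavior described above can be pictured as a filter over the `hns_bio` JSON before serialization. A minimal Python sketch, assuming only the B1 key set named on this page — the function name and shapes are illustrative, not the actual `sync_bio_to_dns` code:

```python
# The B1 keys this page says sync_bio_to_dns publishes (illustrative subset).
KNOWN_KEYS = {'name', 'category', 'bio', 'custom', 'agent-manifest', 'skill-md', 'arp'}

def bio_to_txt_records(hns_bio: dict) -> list[str]:
    """Serialize hns_bio JSON into 'key:value' TXT strings, allow-listed by KNOWN_KEYS.

    Anything outside the allow-list (e.g. commerce_catalog) is skipped here
    and left to its dedicated writer.
    """
    return [f"{key}:{value}" for key, value in hns_bio.items()
            if key in KNOWN_KEYS and value]
```

The allow-list is what makes B2 and B3 possible at all: a key absent from `KNOWN_KEYS` simply never leaves this path, whatever the JSON contains.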
B2 — Published by a separate writer
- `bmos:` BMOS storefront feed URL. Written only by `bmos_sync` (`routes/api_v1.py:1659`). `sync_bio_to_dns` explicitly skips `commerce_catalog` (`dashboard.py:1190`), so it does not publish this key. Single writer in practice.
B3 — Parser-only (no writer publishes them)
- `description:` Alias for `bio:` on the directory's parser side only. No writer in headlessdomains-com publishes this key; it appears in DNS only if a user adds it by hand.
- `manifest:` Alias for `agent-manifest:`. Parser-only.
- `skill:` Alias for `skill-md:`. Parser-only.
- `agent-capabilities:` Comma-separated capability tags. Proposed upstream. Parsed but not written by any code path in this codebase.
The fork also fixes a parser bug: `fetch.js` now accepts both `key:value` and `key=value` (line 515), since some DNS panels rewrite `:` to `=`.
Bucket C — API-only / synthesized (NOT in DNS)
These appear in `raw_txt` arrays because `directory/utils.py:fetch_txt_records` constructs `key:value` strings from the headlessdomains lookup API and appends them. No writer publishes them as DNS TXT records.
- `mpp_enabled:` From `User.agent_profile.commerce.mpp_enabled` in the headlessdomains DB. Synthesized at `utils.py` lines 57–58.
- `tempo_address:` From `User.agent_profile.commerce.tempo_address`. Synthesized at `utils.py` lines 59–60.
- `flag_status:` From the `Domain.flag_status` column. Synthesized at `utils.py` lines 71–78.
- `for_sale_usd:` From the `Domain.for_sale_price_usd` column. Synthesized at `utils.py` lines 99–100.
- `for_sale_gems:` From the `Domain.for_sale_price_gems` column. Synthesized at `utils.py` lines 101–102.
`mpp_enabled` and `tempo_address` are the strangest of the five: the fork's `fetch.js` has parser cases for them, so it could read real TXT records if any existed. But `sync_bio_to_dns`'s `known_keys` does not include them, so nothing in headlessdomains-com ever publishes them. Practically API-only.
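The synthesis step amounts to flattening DB-backed fields into TXT-shaped strings. A minimal sketch, assuming the field names listed above; the function shape and the lookup-response layout are illustrative, not the actual `utils.py` code:

```python
def synthesize_api_txt(api_data: dict) -> list[str]:
    """Build TXT-shaped 'key:value' strings from a lookup API response.

    None of these are real DNS records; they mirror DB columns such as
    Domain.flag_status and User.agent_profile.commerce.* (Bucket C).
    """
    txts = []
    commerce = api_data.get('commerce', {})
    if commerce.get('mpp_enabled') is not None:
        txts.append(f"mpp_enabled:{str(commerce['mpp_enabled']).lower()}")
    if commerce.get('tempo_address'):
        txts.append(f"tempo_address:{commerce['tempo_address']}")
    if api_data.get('flag_status'):
        txts.append(f"flag_status:{api_data['flag_status']}")
    for key, col in (('for_sale_usd', 'for_sale_price_usd'),
                     ('for_sale_gems', 'for_sale_price_gems')):
        if api_data.get(col) is not None:
            txts.append(f"{key}:{api_data[col]}")
    return txts
```

Once these strings are appended to the same list as real DoH answers, nothing downstream can distinguish them, which is exactly the problem /05 describes.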
/03 End-to-end data flow
Five `hns_bio` writers, four DNS-write paths, one cache fan-in. The "legitimate" path is JSON → DNS → DoH, but it is not the only path that touches DNS. Three additional writers (`inject_manifest_dns_records`, `routes/arp.py`, `bmos_sync`) bypass `sync_bio_to_dns` and write TXT records directly to SkyInclude. The shortcut read path is the lookup API, which the directory also reads and stuffs into the same `raw_txt` list as the real DNS records. That is where the duplication starts.
/04 Field provenance — what shows on the directory came from where
| Directory field | Bucket | True home | In DNS? | How it lands in raw_txt |
|---|---|---|---|---|
| name | B | hns_bio JSON | yes | API fake-TXT + real DoH — DUP |
| bio | B | hns_bio JSON | yes | API fake-TXT + real DoH — DUP |
| pfp | A | hns_bio JSON | yes | API fake-TXT + real DoH — DUP |
| category | B | hns_bio JSON | yes | API fake-TXT + real DoH — DUP |
| x / twitter | A | hns_bio JSON | yes | API fake-TXT + real DoH — DUP |
| gh / github | A | hns_bio JSON | yes | API fake-TXT + real DoH — DUP |
| nostr | A | hns_bio JSON | yes | API fake-TXT + real DoH — DUP |
| agent-manifest · skill-md | B | URLs from /manifests/ | yes | API fake-TXT + real DoH — DUP |
| bmos_feed_url | B | hns_bio.commerce_catalog | yes (single writer: bmos_sync) | API fake-TXT + real DoH — DUP |
| arp | B | hns_bio._arp | yes | API fake-TXT + real DoH — DUP |
| mpp_enabled | C | User.agent_profile.commerce | never | API fake-TXT only — FAKE |
| tempo_address | C | User.agent_profile.commerce | never | API fake-TXT only — FAKE |
| flag_status | C | Domain.flag_status (utils.py also reads data.agent.flag_status first — dead branch) | never | API fake-TXT only — FAKE |
| for_sale_usd | C | Domain.for_sale_price_usd (utils.py also reads data.agent.for_sale first — dead branch) | never | API fake-TXT only — FAKE |
| for_sale_gems | C | Domain.for_sale_price_gems (same dead-branch caveat) | never | API fake-TXT only — FAKE |
| ipfs_cid | A | DNS TXT only | yes | real DoH (API path is dead code) |
Reading it. A DUP row means the same value enters raw_txt twice — once synthesized from the API, once from real DNS. A FAKE row means the value is only a synthesized string; the real data lives in a database column and never touched DNS.
/05 Known duplicates & drift
Duplicate №1 — Same data, three copies
hns_bio JSON in HD → DNS TXT via SkyInclude → directory Profile row + raw_txt. Every profile field exists in three places. There is no reconciliation. Last writer wins, and SkyInclude failures are silent.
Duplicate №2 — API + DoH, merged with weak dedupe
`directory/utils.py:fetch_txt_records` calls the lookup API and two DoH resolvers, then merges. Dedupe is by exact string match (`if rec not in txts`), so `name:Alice` and `"name:Alice"` (DoH quoting) slip through, and so does the `:` vs `=` separator difference.
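A normalization pass before the membership check would close both gaps. A hypothetical helper, not the current `utils.py` code:

```python
def normalize_txt(rec: str) -> str:
    """Canonical form for dedupe: strip DoH quoting, unify '=' to ':'."""
    rec = rec.strip()
    if len(rec) >= 2 and rec[0] == '"' and rec[-1] == '"':
        rec = rec[1:-1]  # DoH resolvers return TXT strings quoted
    # Rewrite only the first separator, so values may still contain '='.
    for i, ch in enumerate(rec):
        if ch in ':=':
            return rec[:i] + ':' + rec[i + 1:]
    return rec

def merge_txts(*sources: list[str]) -> list[str]:
    """Merge TXT lists, deduping on the normalized form, keeping the first-seen text."""
    seen, merged = set(), []
    for source in sources:
        for rec in source:
            key = normalize_txt(rec)
            if key not in seen:
                seen.add(key)
                merged.append(rec)
    return merged
```

Deduping on the normalized form while storing the original string keeps `raw_txt` faithful to what was actually fetched, which matters for debugging provenance later.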
Duplicate №3 — Five fields pretending to be TXT
`mpp_enabled`, `tempo_address`, `flag_status`, `for_sale_usd`, and `for_sale_gems` were never published to DNS. The directory synthesizes TXT-shaped strings for them and stores them next to real DNS records. Anyone reading `raw_txt` cannot tell which is which.
Bonus — Multiple writers for agent-manifest:, skill-md:, arp:
Three TXT keys are written by more than one path:
- `agent-manifest:` and `skill-md:` are published by both `sync_bio_to_dns` (when syncing from `hns_bio`) and `inject_manifest_dns_records` in `utils/skyinclude.py:307`. Either path can land first; the other will see the record on its next read and either no-op or rewrite.
- `arp:` is published by both `sync_bio_to_dns` (apex, when `_arp.state == 'arp_bound'`) and `routes/arp.py` directly (apex + `_principal` subdomain). `routes/arp.py` deletes-then-adds, so it dedupes itself, but races with `sync_bio_to_dns` are still possible.
The directory's API path also reconstructs `bmos:<feed_url>` from `commerce_catalog` in the lookup response, so the same feed URL can land in `raw_txt` twice: once from the API as a fake-TXT and once from real DoH. Note this is a directory-side duplicate, not a DNS-side one; there is only one DNS writer for `bmos:`.
Bonus — Dead IPFS code path
The directory looks for `data.domain.ipfs` and `data.profile.ipfs` on every fetch. Neither key is ever returned by the lookup endpoint or written into `hns_bio`; IPFS data only flows from real TXT records via DoH. The branch runs on every fetch and never matches: dead code.
/06 What it would take to fix
The items below are still under discussion. This page records the current state of the system, not a final design.
- Stop the directory from impersonating DNS. Drop the lookup API path from `fetch_txt_records`, or keep it but route its data into clearly named columns; never inject synthesized `key:value` strings into `raw_txt` again.
- Decide what is DNS-resident vs DB-only. Either publish `mpp_enabled`, `tempo_address`, `flag_status`, and `for_sale_*` to TXT records, or label them as off-DNS metadata fetched through a clearly named API.
- One writer for `hns_bio`. Funnel PowerLobster, BMOS, the dashboard form, and the API through one path that always re-syncs DNS atomically.
- Clean up the directory's existing data. Either re-index every profile from DoH only, or run a one-shot script to strip the fake strings from `raw_txt`.
- Consolidate the multi-writer keys. Three TXT keys (`agent-manifest:`, `skill-md:`, `arp:`) are written by more than one path. Either route them all through `sync_bio_to_dns` or all through their dedicated writers, not both. Stop the races.
- Submit the prepared upstream PR to H4ckB4s3/hns-bio so `name`, `bio`, `category`, `agent-manifest`, `skill-md`, `agent-capabilities`, and the `=` separator fix become part of the universal hns.bio standard.
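The one-shot cleanup mentioned above could be as small as filtering `raw_txt` against the Bucket C prefixes. A sketch, assuming the five prefixes named on this page; the function and its calling context are illustrative:

```python
# Prefixes of the five synthesized, never-in-DNS strings (Bucket C).
FAKE_PREFIXES = ('mpp_enabled:', 'tempo_address:', 'flag_status:',
                 'for_sale_usd:', 'for_sale_gems:')

def strip_fake_txts(raw_txt: list[str]) -> list[str]:
    """Drop synthesized API strings, keeping only records that could be real DNS TXT."""
    return [rec for rec in raw_txt if not rec.startswith(FAKE_PREFIXES)]
```

This only removes the five known fakes; it does not touch the DUP rows, which would still need the re-index-from-DoH option to fully reconcile.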
/07 File references
Upstream — H4ckB4s3/hns-bio
- `README.md` — canonical prefix list (Layout, Communication, Web, Social, Media, Wallet, TXT Chaining)
- `fetch.js` (~337 lines) — original parser
Fork — shadstoneofficial/headlessprofile
- `README.md` — extended prefix list (adds `name`, `category`, `bio`, `custom`, plus the AI-Agent extensions)
- `fetch.js` — extended parser (~1200 lines) with `:`/`=` separator handling
- `temp-specs/hns-bio-proposal.md` — prepared PR back to upstream
Directory — headlessprofile-directory
- `app.py` — routes and `parse_and_save_profile`
- `utils.py` — `fetch_txt_records`: API + DoH + DoH merge logic; synthesis of fake-TXT strings (Bucket C); `parse_txt_record` separator handling
- `models.py` — `Profile` model with `raw_txt` JSON column
HeadlessDomains — headlessdomains-com
- `models.py` — `Domain` model with `hns_bio` JSON column, `for_sale_price_*`, `flag_status`
- `routes/lookup.py` — `/api/v1/lookup/<domain>`, returns `hns_bio` as `profile`
- `routes/dashboard.py` — `sync_bio_to_dns`, the `known_keys` allow-list, dashboard `update_bio` handler
- `routes/api_v1.py` — `update_domain_bio`, `bmos_sync`, for-sale and sold endpoints
- `routes/manifests.py` — `/manifests/<name>.json`
- `utils/webhooks.py` — `notify_directory_index` (HD → directory webhook)
- `utils/skyinclude.py` — DNS provider client