Andreas Gohr on Nostr: With the new 16k Token limit introduced by #OpenAI today, I am wondering what's ...
With the new 16k token limit introduced by #OpenAI today, I am wondering what's better when adding context to questions from existing content. Split content into small (~1,000 tokens) or larger (~3,500 tokens) chunks? One allows sending more content pieces that are possibly relevant; the other might create more coherent context. Maybe
npub1f6a33pfyp67y8llhunlhrf855xm47n3fdqymvxfj7yx78c6vqf4scxpnql (npub1f6a…pnql) knows?
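The trade-off in the question can be made concrete with a small chunking sketch. This is a minimal illustration, not anything from the original post: it approximates tokens with whitespace-split words (a real setup would count tokens with the model's own tokenizer, e.g. tiktoken), and the `chunk_size`/`overlap` parameters are hypothetical knobs for experimenting with the small-vs-large question.

```python
def chunk_tokens(text, chunk_size=1000, overlap=100):
    """Split text into chunks of roughly chunk_size tokens.

    Whitespace words stand in for real tokens here; production code
    would use the model's actual tokenizer. Adjacent chunks share
    `overlap` tokens so a relevant passage is less likely to be cut
    in half at a chunk boundary.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    tokens = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break
    return chunks
```

With a 16k context window, ~1,000-token chunks leave room for a dozen or so top-ranked pieces plus the question, while ~3,500-token chunks fit only three or four but keep more surrounding context intact; which wins likely depends on how self-contained the source documents are.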
Published at 2023-06-14 05:45:14

Event JSON
{
  "id": "2e649a08f252267f2ec996a98342352cf484c7f5d56177095da774ca1466e0a7",
  "pubkey": "cf98c6ca4283d57c079787b49c2af5f4fe92770ea656018b466fde5e74acc9d2",
  "created_at": 1686721514,
  "kind": 1,
  "tags": [
    [
      "p",
      "4ebb1885240ebc43fff7e4ff71a4f4a1b75f4e296809b61932f10de3e34c026b",
      "wss://relay.mostr.pub"
    ],
    [
      "p",
      "8b0be93ed69c30e9a68159fd384fd8308ce4bbf16c39e840e0803dcb6c08720e",
      "wss://relay.mostr.pub"
    ],
    [
      "t",
      "openai"
    ],
    [
      "mostr",
      "https://octodon.social/users/splitbrain/statuses/110540981162529080"
    ]
  ],
  "content": "With the new 16k Token limit introduced by #OpenAI today, I am wondering what's better when adding context to questions from existing content. Split content into small (~1000 Tokens) or larger (~3500 Tokens) chunks? One allows to send more content pieces that are possibly relevant, the other might create more coherent context? Maybe nostr:npub1f6a33pfyp67y8llhunlhrf855xm47n3fdqymvxfj7yx78c6vqf4scxpnql knows?",
  "sig": "a0372367a101af99984433d2fee411a918a1775fe4625f8f9c9d83835af13ccd13ad2cbfa05626791237ebdcf18207b9a4b5b9591106228f9c4b77693025bbed"
}