Event JSON
{
  "id": "7015fa37edfdfc3251f49366f6d5f0e5d9b0049117084deb2fe3fd3d696fcb49",
  "pubkey": "adaee8ec352054a768bc020f8d430f7fd568da4ece7c3e11b173edab133a6302",
  "created_at": 1692748568,
  "kind": 1,
  "tags": [
    [
      "p",
      "04fa8b9da4d5399c922092e1293339eaec9bdafd635fd1af7bb94ec5ec8e9a0e",
      "wss://relay.mostr.pub"
    ],
    [
      "p",
      "2a143ab7e99da012c3e4b3fe36cdba72f7b0c5e2e4744c7f188ebbcf89157854",
      "wss://relay.mostr.pub"
    ],
    [
      "e",
      "38d448dc46343ae85cd351b43c863ba945c634dc4f6c8f7af5b18a972dd4625b",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://journa.host/users/jperlow/statuses/110935970179480940",
      "activitypub"
    ]
  ],
  "content": "nostr:npub1qnagh8dy65ueey3qjtsjjveeatkfhkhavd0artmmh98vtmywng8qzrlnj7 However, if the AI/ML cores on the A-series chips are capable of generative AI, then it's possible we might see Apple implement a Siri that uses a back-end LLM. So it is still relevant even if Mediatek is not the one to implement it or even if LLAMA 2 isn't used.",
  "sig": "9e0d22e8664c69229cbdd3237648a32333c4aaee19039a233d8893c445fd6de6b8698a07b73670790ba4580b4219d81cf244cc801565d18cf9e68bdef9aca0b4"
}
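
For reference, the "id" field of a Nostr event like the one above is defined by NIP-01 as the SHA-256 hash of the compact JSON serialization [0, pubkey, created_at, kind, tags, content]. The sketch below is illustrative only (the function name compute_event_id and the event dict are assumptions, not from any particular client library); it shows how that id could be recomputed with the Python standard library.

    import hashlib
    import json

    def compute_event_id(event: dict) -> str:
        """Compute the NIP-01 event id: sha256 over the canonical
        serialization [0, pubkey, created_at, kind, tags, content]."""
        serialized = json.dumps(
            [0, event["pubkey"], event["created_at"], event["kind"],
             event["tags"], event["content"]],
            separators=(",", ":"),   # compact form, no whitespace, per NIP-01
            ensure_ascii=False,      # keep UTF-8 characters unescaped
        )
        return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

If the fields shown above are copied verbatim into such an event dict, the computed hash should match the "id" field. The "sig" field is a BIP-340 Schnorr signature over that id and would need a secp256k1 library to verify.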