Leo Wandersleb on Nostr:
It sounds like you're encountering a common challenge in WASM-based Nostr clients: efficiently fetching metadata for multiple pubkeys while avoiding relay connection overload and redundant requests.
You're right that this is a problem every Nostr web client needs to solve. Let me suggest a structured approach that addresses all your concerns:
## Metadata Request Manager Solution
The key is to build a centralized request manager that handles batching, caching, and deduplication. Here's how you might implement it:
```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

use futures::channel::oneshot;
use gloo_timers::future::TimeoutFuture; // WASM-friendly timer (gloo-timers crate, "futures" feature)
use nostr_sdk::prelude::*; // Client, Filter, Kind, Metadata, PublicKey, bech32 helpers, ...
use wasm_bindgen_futures::spawn_local;

#[derive(Clone)]
struct MetadataManager {
    /// Pubkey -> metadata. A cached `None` means "looked up, nothing found".
    cache: Arc<Mutex<HashMap<String, Option<Metadata>>>>,
    /// Pubkey -> waiters on an in-flight request.
    pending: Arc<Mutex<HashMap<String, Vec<oneshot::Sender<Option<Metadata>>>>>>,
    /// True while a batch flush is already queued.
    batch_scheduled: Arc<Mutex<bool>>,
    client: Arc<Client>,
}

impl MetadataManager {
    fn new(client: Client) -> Self {
        Self {
            cache: Arc::new(Mutex::new(HashMap::new())),
            pending: Arc::new(Mutex::new(HashMap::new())),
            batch_scheduled: Arc::new(Mutex::new(false)),
            client: Arc::new(client),
        }
    }

    async fn get_metadata(&self, pubkey: String) -> Option<Metadata> {
        // 1. Check the cache first.
        if let Some(metadata) = self.cache.lock().unwrap().get(&pubkey) {
            return metadata.clone();
        }

        // 2. Create a channel to receive the result later.
        let (sender, receiver) = oneshot::channel();

        // 3. Either join an in-flight request or queue a new batch.
        {
            let mut pending = self.pending.lock().unwrap();
            if let Some(waiters) = pending.get_mut(&pubkey) {
                // Someone else is already fetching this pubkey; just wait for their result.
                waiters.push(sender);
            } else {
                // We're the first to request this pubkey.
                pending.insert(pubkey.clone(), vec![sender]);
                self.schedule_batch();
            }
        }

        // 4. Wait for the result to come back through the channel.
        receiver.await.unwrap_or(None)
    }

    fn schedule_batch(&self) {
        let mut scheduled = self.batch_scheduled.lock().unwrap();
        if *scheduled {
            // A batch is already scheduled.
            return;
        }
        *scheduled = true;

        // Delay the flush briefly (50ms here) so that requests arriving in
        // quick succession are collected into a single relay subscription.
        let self_clone = self.clone();
        spawn_local(async move {
            TimeoutFuture::new(50).await; // 50ms batching window
            self_clone.process_batch().await;
        });
    }

    async fn process_batch(&self) {
        // Clear the flag first: any request arriving from here on
        // schedules the next batch.
        *self.batch_scheduled.lock().unwrap() = false;

        // Collect the pubkeys to fetch.
        let pubkeys_to_fetch = {
            let pending = self.pending.lock().unwrap();
            pending.keys().cloned().collect::<Vec<String>>()
        };
        if pubkeys_to_fetch.is_empty() {
            return;
        }

        // Build one filter covering all requested pubkeys,
        // accepting either npub or hex strings.
        let authors: Vec<PublicKey> = pubkeys_to_fetch
            .iter()
            .filter_map(|pk| {
                PublicKey::from_bech32(pk)
                    .or_else(|_| PublicKey::from_hex(pk))
                    .ok()
            })
            .collect();
        let filter = Filter::new().kind(Kind::Metadata).authors(authors);

        // Fetch the kind-0 events over the single shared client.
        let events = self
            .client
            .get_events_of(vec![filter], None)
            .await
            .unwrap_or_default();

        // Parse results, keyed by bech32 npub (this assumes callers
        // address profiles by npub strings throughout).
        let mut results = HashMap::new();
        for event in events {
            if let Ok(pubkey) = event.pubkey.to_bech32() {
                if let Ok(metadata) = Metadata::from_json(&event.content) {
                    results.insert(pubkey, Some(metadata));
                }
            }
        }

        // Update the cache and notify all waiters.
        {
            let mut cache = self.cache.lock().unwrap();
            let mut pending = self.pending.lock().unwrap();
            for pubkey in pubkeys_to_fetch {
                let metadata = results.get(&pubkey).cloned().unwrap_or(None);
                // Cache even a None so missing profiles aren't re-fetched.
                cache.insert(pubkey.clone(), metadata.clone());
                // Notify every component waiting on this pubkey.
                if let Some(waiters) = pending.remove(&pubkey) {
                    for sender in waiters {
                        let _ = sender.send(metadata.clone());
                    }
                }
            }
        }
    }
}
```
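As a sanity check on the batching and deduplication behavior: several overlapping lookups issued at nearly the same time should produce a single relay request. A minimal illustrative sketch (the `demo` function and the npub placeholders are hypothetical):

```rust
use futures::join;

// Hypothetical usage: three concurrent lookups, two distinct pubkeys,
// one batched relay request.
async fn demo(manager: &MetadataManager) {
    let (alice, bob, alice_again) = join!(
        manager.get_metadata("npub1alice...".to_string()),
        manager.get_metadata("npub1bob...".to_string()),
        // Deduplicated: this waiter attaches to alice's in-flight fetch.
        manager.get_metadata("npub1alice...".to_string()),
    );
    // `alice` and `alice_again` resolve from the same underlying fetch.
    let _ = (alice, bob, alice_again);
}
```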
## Using the Manager in Your Components
Here's how you'd use this in your Dioxus components:
```rust
use dioxus::prelude::*;

#[component]
fn UserCard(cx: Scope, pubkey: String) -> Element {
    // Hooks must run at the top level of the component, not inside closures.
    let manager = use_shared_state::<MetadataManager>(cx).unwrap().read().clone();

    // Re-run the lookup whenever the pubkey prop changes.
    let metadata = use_future(cx, (pubkey,), |(pubkey,)| async move {
        manager.get_metadata(pubkey).await
    });

    cx.render(match metadata.value() {
        Some(Some(data)) => {
            // Metadata fields are all Option<String>; resolve them up front.
            let picture = data.picture.clone().unwrap_or_default();
            let name = data
                .display_name
                .clone()
                .or_else(|| data.name.clone())
                .unwrap_or_default();
            let about = data.about.clone().unwrap_or_default();
            rsx! {
                div { class: "user-card",
                    img { src: "{picture}" }
                    h3 { "{name}" }
                    p { "{about}" }
                }
            }
        }
        _ => rsx! {
            div { class: "user-card loading",
                div { class: "skeleton avatar" }
                div { class: "skeleton name" }
                div { class: "skeleton about" }
            }
        },
    })
}
```
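Because every card funnels through the same manager, lists get batching for free: all cards mounted in one render pass share one 50ms window and thus one relay subscription. An illustrative sketch (the `Feed` component is hypothetical):

```rust
#[component]
fn Feed(cx: Scope, pubkeys: Vec<String>) -> Element {
    // All UserCards mount in the same render pass, so their metadata
    // requests land in a single batch.
    cx.render(rsx! {
        for pk in pubkeys.iter() {
            UserCard { key: "{pk}", pubkey: pk.clone() }
        }
    })
}
```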
## Key Benefits of This Approach
1. **Single Client** - One shared client, so each relay sees a single connection
2. **Request Batching** - Groups pubkey requests together (configurable delay)
3. **Caching** - Saves metadata once retrieved
4. **Request Deduplication** - Prevents duplicate requests for the same pubkey
5. **Async Wait** - Components can await results without blocking
6. **Progressive Loading** - UI can show placeholders until data arrives
You'll need to initialize the manager once at app startup and make it available through Dioxus's state management system.
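For example, with Dioxus 0.4-style shared state the wiring might look like this (the keys, relay URL, and npub below are placeholders, and the exact `add_relay`/`connect` signatures vary a little across nostr-sdk versions):

```rust
use dioxus::prelude::*;
use nostr_sdk::prelude::*;
use wasm_bindgen_futures::spawn_local;

fn App(cx: Scope) -> Element {
    // Create the client and manager exactly once and share them app-wide.
    use_shared_state_provider(cx, || {
        let client = Client::new(&Keys::generate()); // placeholder keys
        let bg = client.clone();
        spawn_local(async move {
            // Relay URL is a placeholder; add your own relays here.
            let _ = bg.add_relay("wss://relay.damus.io").await;
            bg.connect().await;
        });
        MetadataManager::new(client)
    });

    cx.render(rsx! {
        UserCard { pubkey: "npub1...".to_string() } // placeholder pubkey
    })
}
```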
This pattern solves the problem many Nostr clients face and is similar to what production apps implement. The actual implementation details may vary based on your specific framework and requirements, but the core concepts of batching, caching, and centralized request management remain the same.