Jeremy List on Nostr: What if one were training a recurrent neural network but instead of backpropagation ...
What if one were training a recurrent neural network, but instead of backpropagation through time one did backpropagation only within each time step, and the loss function had an extra term for how well a second neural network could recover the input and previous hidden state from the next hidden state (kind of like an autoencoder)?
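A minimal PyTorch sketch of one way this could look (the class names, layer choices, MSE losses, and shapes below are my own assumptions, not anything stated in the post): the previous hidden state is detached so gradients stay within a single time step, and a second "decoder" network is trained to reconstruct the input and previous hidden state from the new hidden state, with that reconstruction error added to the per-step loss.

import torch
import torch.nn as nn

class StepwiseRNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.cell = nn.RNNCell(input_size, hidden_size)     # h_t = f(x_t, h_{t-1})
        self.readout = nn.Linear(hidden_size, output_size)  # task prediction head
        # second network: tries to recover (x_t, h_{t-1}) from h_t, autoencoder-style
        self.decoder = nn.Linear(hidden_size, input_size + hidden_size)

    def step(self, x_t, h_prev, y_t, recon_weight=1.0):
        h_prev = h_prev.detach()               # cut the graph: no backpropagation through time
        h_t = self.cell(x_t, h_prev)
        task_loss = nn.functional.mse_loss(self.readout(h_t), y_t)
        recovered = self.decoder(h_t)          # how well can this step's inputs be recovered from h_t?
        target = torch.cat([x_t, h_prev], dim=-1)
        recon_loss = nn.functional.mse_loss(recovered, target)
        return h_t, task_loss + recon_weight * recon_loss

def train_sequence(model, optimizer, xs, ys, hidden_size):
    # xs: (seq_len, batch, input_size), ys: (seq_len, batch, output_size)
    h = torch.zeros(xs.size(1), hidden_size)
    for x_t, y_t in zip(xs, ys):
        h, loss = model.step(x_t, h, y_t)
        optimizer.zero_grad()
        loss.backward()                        # gradient flows only within this time step
        optimizer.step()
    return h

One plausible reading is that the reconstruction term stands in for the credit assignment BPTT would otherwise provide, by pressuring each hidden state to retain enough information about the step that produced it; whether that suffices for long-range dependencies is exactly the untested part.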
Published at 2024-01-20 05:55:53
Event JSON
{
"id": "48f3c66f8ebf9b3eb4d56dd8e97649b4010ee7d4f2d6b68f7bb191605bc1edd4",
"pubkey": "52a02488c84a10a1ff1c12c2394fdec529db5b6b0af69ed5c855694701dcdd41",
"created_at": 1705730153,
"kind": 1,
"tags": [
[
"content-warning",
"Untested neural network idea"
],
[
"proxy",
"https://hachyderm.io/users/jeremy_list/statuses/111786731346031161",
"activitypub"
]
],
"content": "What if one were trainings a recurrent neural network but instead of backpropagation through time one just did backpropagation within each time step but the loss function had an extra term for how well a second neural network could recover the input and previous hidden state from the next hidden state (kind of like an autoencoder)?",
"sig": "183737e531c2a106d5681e4826d28bdc3879cf4d00da60aacdaee570e15a59adf5e18d5e1a620e3950f6846ba78754dcf0e27c631b60f1734614ac7bb17e1f7e"
}