Thread #108254222
File: o02c3 q0xuqc.png (930.5 KB)
>instead of loading the whole model, I can run just the tokenizer from the large model and decrypt the message. So the sender could use a 400B-parameter model, but a phone user with access to the key and the tokens can still read the contents. The hardware limitation has been bypassed
You can have central command send texts that look completely human on Reddit, Substack, Twitter, and 4chan, nothing out of place. The user never has to be caught using Signal or Tor; an internet layer on top of a layer has been created
>works across CUDA, Mac, and other platforms
>seed/password locked
>only the person crafting the message needs good compute
>the receiver can decode it easily
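A minimal sketch of how the claim above could work, under assumptions the post doesn't spell out: suppose the sender's big model picks each next token so that a keyed hash of that token carries one message bit. Then the receiver needs only the tokenizer and the shared key, never the weights. Everything here is hypothetical (the whitespace `split` stands in for the real tokenizer, and the candidate lists stand in for the model's sampled next-token options):

```python
import hmac
import hashlib

def token_bit(key: bytes, token: str) -> int:
    # Keyed hash of the token; the low bit of the digest carries
    # one bit of the hidden message.
    return hmac.new(key, token.encode(), hashlib.sha256).digest()[0] & 1

def encode(key: bytes, bits, candidates_per_step):
    # Sender side (needs the big model): at each step the model
    # proposes several plausible next tokens; pick any candidate
    # whose keyed bit matches the next message bit. Here the
    # "model" is just a canned list of candidate tokens per step.
    out = []
    for bit, candidates in zip(bits, candidates_per_step):
        token = next(t for t in candidates if token_bit(key, t) == bit)
        out.append(token)
    return " ".join(out)

def decode(key: bytes, text: str):
    # Receiver side: only the tokenizer (here a toy whitespace
    # split) and the key are needed -- no model weights, so a
    # phone can do it.
    return [token_bit(key, t) for t in text.split()]
```

Note the asymmetry this buys: `encode` needs enough compute to sample good candidates from a 400B model, while `decode` is a hash per token, which is why only the message crafter needs real hardware.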
6 Replies
File: Screenshot from 2026-02-27 17-18-58.png (570 KB)
>>108254222
This is exactly what I'm working on!!!!