Thread #108254222
>instead of loading the whole model I can use the large model's tokenizer and decrypt the message. So the sender could use a 400b parameter model, but a phone user can decode it just by having access to the key and the tokens, and read the contents. So the hardware limitation has been bypassed


You can have central command send texts that look very human on reddit, substack, twitter and 4chan and don't seem out of place. The user doesn't need to be caught using signal or tor; a whole new layer has been created on top of the internet

>works across cuda, mac and other platforms
>seed/password locked
>only person who needs good compute is the one crafting the message
>receiver can decode it easily (rough sketch below)
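
rough toy sketch of what I mean (every name and the keystream here are placeholders I made up, and the compute-heavy part, steering the big model so its token ids actually carry these bytes while the text still reads like a normal post, is not shown):

[code]
# toy sketch only -- illustrates why the receiver never needs the 400b weights.
# assumption: payload is XOR-encrypted with a key-derived keystream and hidden
# in the low byte of each token id; getting the big model to emit natural text
# whose token ids land on those bytes is the sender's compute problem.
import hashlib

def keystream(key: str, n: int) -> bytes:
    """Derive n pseudo-random bytes from the shared key (toy stream cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return out[:n]

def sender_encode(message: str, key: str) -> list[int]:
    """Sender side: encrypt the payload into the byte values the big model
    would have to hide in its token choices (that expensive step not shown)."""
    data = message.encode()
    ks = keystream(key, len(data))
    return [b ^ k for b, k in zip(data, ks)]

def receiver_decode(token_ids: list[int], key: str) -> str:
    """Receiver side: needs only the token ids (from the tokenizer) and the key."""
    hidden = [tid & 0xFF for tid in token_ids]   # low byte carries the payload
    ks = keystream(key, len(hidden))
    return bytes(b ^ k for b, k in zip(hidden, ks)).decode()

if __name__ == "__main__":
    key = "hunter2"
    payload = sender_encode("meet at 9", key)
    # pretend these bytes ended up as the low bytes of real token ids
    fake_token_ids = [0x1F00 | b for b in payload]
    print(receiver_decode(fake_token_ids, key))   # -> "meet at 9"
[/code]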
>>
>>108254222
did your AI tell you that this was a brilliant idea, and that you're absolutely right?
>>
>>108254222
based department. bumping for interest.
>>
>>108254396
>did your AI tell you that this was a brilliant idea, and that you're absolutely right?


lol why do u ask, jealous?
>>
>>108254222
the server generating the message can decode it easily too. tokenization is not encryption. I'm honestly not sure what you think is novel about this
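
to spell it out with a toy example (word-level stand-in for a real BPE tokenizer, which is just a much bigger public table):

[code]
# tokenization is a public, keyless mapping -- anyone with the same tokenizer
# reverses it. no secret is involved anywhere in this step.
VOCAB = ["the", "meeting", "is", "at", "nine"]
TOK = {w: i for i, w in enumerate(VOCAB)}

ids = [TOK[w] for w in "meeting at nine".split()]   # "encode"
print(ids)                                          # [1, 3, 4]
print(" ".join(VOCAB[i] for i in ids))              # anyone can decode this
[/code]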
>>
>>108254675
>the server generating the message can decode it easily too. tokenization is not encryption. I'm honestly not sure what you think is novel about this

yeah that's why i own the server
>>
>>108254222
This is exactly what I'm working on!!!!
