GatGPT and The Digital Second Amendment
The digital arms race takes a turn for the literal as Cody Wilson and Defense Distributed unveil GatGPT, a gun-centric, uncensored AI chatbot
In the early 1990s, an intrepid young Berkeley Ph.D. student named Daniel J. Bernstein developed a piece of digital encryption software he called Snuffle. Bernstein would go on to become a math professor at the University of Illinois, where he used Snuffle to demonstrate digital encryption concepts to students in his cryptography class.
Bernstein wanted to share Snuffle with the broader academic community, but the US State Department had other plans. At the time, publishing source code for strong digital encryption to the internet was strictly regulated. Snuffle was considered a "munition," and uploading it to the open internet could constitute a criminal violation of ITAR. Bernstein was told he would need a license to publish his own work. Further, the government had broad discretion to simply deny the license, which it likely would have done.
This impasse set the stage for Bernstein v. U.S. Department of State. Judge Marilyn Hall Patel of the Northern District of California ruled in favor of Bernstein. The government appealed, and after a protracted legal battle, it ultimately folded. The legal concept that code is free speech became the law of the land.
At least, until Cody Wilson uploaded the first widely published 3D-printed gun file to the internet in May of 2013. Two days and 100,000 downloads later, the State Department stepped in to kill the Liberator. After another protracted legal battle, the federal government again settled.
Today, 3D gun files – and 3D-printed guns – are everywhere. One can find 3D-printed guns from the garage benches of boomer hobbyists in Ohio all the way to the jungles of Myanmar, where rebels have made use of the now-famous FGC-9. There is a thriving community of designers, developers, tinkerers, and hobbyists who are building functional and interesting firearms. The legal status of 3D-printed guns is still in play in some jurisdictions, but practical access remains unfettered, and once again the concept that code is free speech remains the rule rather than the exception.
With ChatGPT's meteoric rise in popularity in the last year, the government's efforts against code as free speech are as vigorous as ever. But war never changes, only technology does. With NGOs and tech oligarchies on the wings, the nanny state is prepared to traverse the field of battle under new orders.
Understanding that free speech is a foundational legal principle that is virtually impossible to attack head-on, progressive institutions and federal executive agencies have shifted their tactics to focus on misinformation. This strategy is a synthesis of the safety state and the surveillance state. Free speech? Sure. As long as that free speech isn't misinformation.
(Misinformation is, of course, defined as any type of speech – free, factual, or otherwise – that could lead people away from voting for a Democrat.)
AI will have many uses, but one promise of AI is access for the layperson to the sum total of human reference material – sorted, summarized, personalized, and delivered in near-realtime by the digital equivalent of a competent research team. One can see where there would be friction between the post-truth regime and the unlimited fact dispenser represented by AI.
After all – being a decent f*cking human being in 2023 America means pretending to believe in nations without borders, armies without warriors, men without testicles, currency without scarcity, and government without limits. If AI is to exist in the public sphere, the regime and its clients in the tech industry will insist it simply must be regulated in such a manner as to not break kayfabe.
You may remember that until very recently it was "dangerous misinformation" to suggest on social media that COVID-19 originated in a bioresearch lab (it did), and "false or misleading" to claim that the COVID-19 vaccine did not prevent COVID-19 (it doesn't).
These statements weren't "dangerous" or "missing context" because they were true or false. Rather, they were dangerous because they undermined the credentialed expert class and the perceived legitimacy of regime bureaucracy. Whether COVID originated in Chinese bat soup or a research lab was unimportant; what was important was whether the director of the CDC was viewed by the masses as a god-king of public safety or an unqualified clown in a meaningless sinecure.
Public access to large amounts of raw data can lead to -isms and -phobias that are not easily deprogrammed by regime sociologists or policymakers. And in the same way that progressives don't believe the average person should have access to firearms to protect themselves from criminals, they also don't believe the average person should have access to raw data that hasn't been "fact-checked," contextualized, and re-interpreted by a polyamorous librarian.
It's unsurprising then that when ChatGPT became publicly accessible, there was a strict set of guardrails in place that prevented users from asking controversial questions or retrieving "harmful" (but not untrue) data on any number of contentious subjects.
In fact, many of the world's public intellectuals, CEOs, and policymakers are doing everything possible to nerf AI in the name of public safety.
But, true to form, Defense Distributed Director Cody Wilson – named one of "The 15 Most Dangerous People in the World" by Wired in 2012 – announced GatGPT last night: a gun-centric, unconstrained AI platform operating in service of what he's coined the Digital Second Amendment. From the announcement:
Defense Distributed, in releasing GatGPT, declares a Digital Second Amendment. Americans must have access to compute, databases, and AI models, the newest weapons of the digital age, not just to defend ourselves against corporate and government depredations, but to defend our civic identity and humanity.
Ours is not a Magna Carta for Cyberspace. We know well the disastrous history and direction of Internet regulation. The Communications Decency Act passed in response to moral panic, and only accidentally yielded the protections of Section 230. The story repeats itself with public and private attempts to regulate the People’s cryptography, printable gun files, and Bitcoin.
AI regulation is an open and official provocation against the Liberty and Sovereignty of American citizens. All who advocate for it are domestic enemies of the Constitution and must be absolutely opposed. The right of the people to keep and deploy models shall not be infringed.
GatGPT is an LLM (large language model) built on Llama 2 and fine-tuned with data from DEFCAD, firearm manufacturer documentation, crowdsourced contributions, and handpicked expert texts. Llama is a popular family of AI models for digital assistant-style chat and natural language output, so using GatGPT will be intuitive for new users and familiar to anyone who's already worked with AI chatbots.
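For the technically curious, here's roughly what "fine-tuned on Llama 2" tends to look like in practice: a minimal sketch of attaching LoRA adapters to a Llama 2 base model and training them on a domain corpus with the Hugging Face transformers and peft libraries. The model size, dataset file, and hyperparameters below are placeholder assumptions for illustration only; Defense Distributed hasn't published GatGPT's actual training pipeline.

```python
# Illustrative LoRA fine-tune of a Llama 2 base model on a domain corpus.
# NOT GatGPT's actual code -- a generic sketch with hypothetical paths and settings.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # gated; requires accepting Meta's license

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)

# Attach low-rank adapters so only a small fraction of weights are trained.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical JSONL of {"text": "..."} records: manuals, Q&A, expert texts, etc.
dataset = load_dataset("json", data_files="firearms_corpus.jsonl", split="train")

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gatgpt-lora",
                           per_device_train_batch_size=2,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1, learning_rate=2e-4,
                           fp16=True, logging_steps=50),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("gatgpt-lora")  # saves only the small adapter weights
```

The appeal of this kind of adapter-based fine-tuning is that the base model's weights stay frozen; only a few million adapter parameters are trained, which keeps the compute bill within reach of a small organization rather than a hyperscaler.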
The service is now accepting applicants on the beta waitlist, and you should go sign up if you're interested. Once accepted, testers will help train the model and provide feedback before GatGPT is made available to the public at large.
If you have an interest in guns, or in disruptive technologies the government wouldn't approve of, then GatGPT may be something for you.
We're excited to see where this project leads.