Nym promises to be the 'better than #Tor' anonymous network: a bold claim, but one she says they don't make lightly.
@theruran Cool, some new homework. I'll let you know; I have to read it first. Chances are, since it is something "over TCP/IP" like Tor (vulnerable to the hidden channels + IC serial-number tagging attack), and running on unsafe PCs without any critical execution protection like SGX, that it will be equivalent to Tor in terms of anonymity protection against major players like the NSA. Still, as it is new, it may work better for a short while, before they adapt their govware to it.
@theruran By the way, talking about the fight against hidden channels in general, I see two complementary approaches:
• Designing and using hidden-channel-safe protocols. In the current paradigm it's completely fucked up, as TCP/IP itself is full of possible hidden channels.
• Guaranteeing strict code execution to ensure no malware can insert data into hidden channels. Also fully fucked up in the current paradigm.
And then they say guys like us are crazy to desire a new paradigm. LOL.
@theruran And my latest bet is to try to suppress the protocol notion as we know it, which should help solve, or completely solve, the hidden-channel issues. But this means a complete change of paradigm for digital systems and the cyberspace concept.
Do you have any other complementary ideas or approaches to stop hidden channels?
@stman what about printing the messages on paper? or passing them through a transformer that displays an easily verifiable artifact?
unused bits as indicated in specs must either be formally verified to ensure against their use or removed altogether, possibly making for awkward packing or an inefficient representation.
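To make the unused-bits risk concrete, here is a minimal sketch, with an invented 4-byte header format, of how "reserved, must be zero" bits become a covert channel the moment nothing on the wire enforces the spec:

```python
import struct

# Hypothetical 4-byte header: 12-bit message type, 4 "reserved" bits,
# 16-bit length. The spec says the reserved bits MUST be zero, but
# nothing enforces that, so they can carry a covert nibble per header.

def pack_header(msg_type: int, length: int, covert_nibble: int = 0) -> bytes:
    assert 0 <= msg_type < 0x1000 and 0 <= length < 0x10000
    assert 0 <= covert_nibble < 0x10
    word = (msg_type << 20) | (covert_nibble << 16) | length
    return struct.pack(">I", word)

def extract_covert(header: bytes) -> int:
    (word,) = struct.unpack(">I", header)
    return (word >> 16) & 0xF

# A compliant receiver never inspects the reserved bits, but a
# colluding one recovers 4 hidden bits per header:
h = pack_header(msg_type=7, length=512, covert_nibble=0b1010)
assert extract_covert(h) == 0b1010
```

This is why the two remedies above are the only ones: either the format has no such slack bits at all, or every implementation is verified never to set them.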
@theruran There are two categories of hidden channels:
• Time-based ones,
• Data-format / protocol ones.
Making digital systems and the cyberspace concept fully synchronous can make the time-based ones easy to solve.
For the unused-bits issue, I see two approaches: the first is to have no unused bits at all (hidden-channel-safe data formats and protocols); the second, which I tend to prefer today, is to get rid of the protocol notion as we know it.
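The time-based category can be illustrated with a deliberately simple sketch (delay values and threshold are invented; timestamps are simulated for determinism): the sender leaks one bit per message by choosing when to send it, which is exactly the degree of freedom a fully synchronous system with fixed message slots takes away.

```python
# Hypothetical timing channel: short delay = 0, long delay = 1.
# In a fully synchronous system every transmission slot is fixed,
# so this sender-controlled timing freedom disappears.

SHORT, LONG, THRESHOLD = 0.01, 0.05, 0.03

def encode_timestamps(bits, start=0.0):
    t, stamps = start, []
    for b in bits:
        t += LONG if b else SHORT  # sender chooses the inter-message gap
        stamps.append(t)
    return stamps

def decode_timestamps(stamps, start=0.0):
    bits, prev = [], start
    for t in stamps:
        bits.append(1 if (t - prev) > THRESHOLD else 0)
        prev = t
    return bits

secret = [1, 0, 1, 1, 0]
assert decode_timestamps(encode_timestamps(secret)) == secret
```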
@theruran File formats are indeed to be considered a special category of protocols.
Visualizing things this way is helpful for envisioning how to suppress both protocols and file formats.
And here, we fall back onto the path I am exploring by revisiting the memoryspace concept, its dimensional characteristics, and what concept we would push for data.
As you can see, by pushing a new concept for data and memoryspaces, we can feel that we can almost solve everything.
@theruran To improve visualization, we can try to list the differences between file formats, which include the notion of a file, and protocols.
We're close to a solution.
We can't see it yet because our minds are still too polluted by the existing paradigm, but you can feel, as I do, that we're very close.
In 2013, while I was giving a public conference on free integrated circuits with a nice cypherpunk red mohawk, a French military officer came to me and told me that hidden channels were their
@theruran worst nightmare. I fully agree with him, and it's also the case for me and for true crypto-anarchists.
Seven years later, on our own, you and I are about to find definitive solutions to this big issue, and many more.
But there is a price to pay, which we accept and most militaries refuse: the digital systems paradigm, concepts, and architectures must be fully re-engineered from scratch, starting from a blank page.
This is why we will succeed soon, and why they will fail forever.
@stman I think what you want is to transmit Abstract Syntax Trees, i.e. executable programs. They are only serialized while in transit on the wire. There are even cryptographic ways for the sender to ensure the program is executed properly. The program's environment is encapsulated, or can be swapped with a trusted environment more safely than the joke that sandboxes are today.
It's kinda like sending the image decoder with the image data and metadata. Except these programs can be a lot simpler and more standardized than they are today. We essentially know the breadth of common use cases and can design for that.
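The "image decoder shipped with the image data" idea can be sketched as a payload that carries both its data and a tiny decoder program as an AST, evaluated by a minimal interpreter rather than the host's general-purpose runtime (the AST grammar here is invented for illustration):

```python
# Minimal AST interpreter: the receiver evaluates only this small,
# closed set of operations, instead of trusting an opaque decoder binary.

def evaluate(node, env):
    op = node[0]
    if op == "lit":
        return node[1]                                  # literal value
    if op == "var":
        return env[node[1]]                             # data lookup
    if op == "add":
        return evaluate(node[1], env) + evaluate(node[2], env)
    if op == "mul":
        return evaluate(node[1], env) * evaluate(node[2], env)
    raise ValueError(f"unknown op: {op}")

# Payload = the data plus the program that knows how to interpret it.
payload = {
    "data": {"x": 3},
    "decoder": ("add", ("mul", ("var", "x"), ("lit", 2)), ("lit", 1)),  # 2x + 1
}
assert evaluate(payload["decoder"], payload["data"]) == 7
```

Because the interpreter is the only thing the receiver runs, swapping its environment for a trusted one is a local decision, which is the encapsulation property described above.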
@theruran It is clear that the concept of Abstract Syntax Trees is the closest to what I am trying to extract from my visions. But it is still a concept formulated within the current digital systems paradigm.
In this regard, I am trying to improve it by generalizing it better, by reworking the underlying or fundamental concepts it relies on. I think we must stay in a concept-fuzzing-and-merging state of mind. But we are on the right path anyway.
Remember what I used to
@theruran repeat when we started working together: there is no difference between hardware, software, protocols, and network physical topologies. Everything is code. I was saying this to underline the fact that the boundaries between these specialities are purely subjective, and that they shape our minds in a way that prevents us from innovating or seeing other approaches to digital systems. The same corollary remark applies to the commonly understood concepts of personal computers, CPUs, and networks.
@theruran I am recalling this because we should try to see the AST concept in that light, but also within the new paradigm slowly forming in our minds as we invent new alternative memoryspace concepts, the full-synchronicity constraint, and what alternative processing units (plural) could be as concepts, in a fully decentralized way.
We are in the most fascinating phase of our research, where we have visions, many carefully selected elementary concepts in mind,
@theruran several strong new constraints or characteristics, and we're about to merge all this into a brand-new global approach.
We are very close to finding, inventing, or simply seeing and discovering several stunning new alternative paradigm proposals that will all make sense.
It's really starting to be fascinating.
We are close.
@theruran What cryptographic ways are you thinking about?
And yes, only a serialization issue would remain; by the way, such issues would be greatly simplified if we were in a fully synchronous paradigm.
The research directions we have slowly been revealing through all our talks and debates are very coherent. It is obvious we are on the right path to what we want to achieve.
This is the project I was remembering and referencing in my post:
An Ironclad App lets a user securely transmit her data to a remote machine with the guarantee that every instruction executed on that machine adheres to a formal abstract specification of the app’s behavior. This does more than eliminate implementation vulnerabilities such as buffer overflows, parsing errors, or data leaks; it tells the user exactly how the app will behave at all times.
Going through the cryptography section of their website, I found some related topics that may be of interest: verifiable computing, homomorphic encryption, Secure Multi-Party Computation and EzPC, Certification of Symbolic Transactions, and Differential Privacy (also under Database Privacy).
This may catch your attention, from the EzPC page:
Secondly, to execute these protocols, one must express the computation at the low-level of circuits comprising of AND and OR gates, which is both highly cumbersome and inefficient.
So there's a lot here that ought to stimulate your imagination. This is what I imagine for the future of computing is that these cryptographic mechanisms are native and used to guarantee privacy of data and computation.
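To see why expressing a computation "at the low-level of circuits" is so cumbersome, here is a sketch of a 1-bit full adder and a ripple-carry adder built only from gate functions; even trivial arithmetic explodes into many gate applications (the helper names are ours, not from EzPC):

```python
# Basic gates as functions over bits (0/1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, cin):
    # Sum and carry of three input bits, from gates alone.
    s1 = XOR(a, b)
    total = XOR(s1, cin)
    carry = OR(AND(a, b), AND(s1, cin))
    return total, carry

def add_bits(xs, ys):
    # Ripple-carry addition over little-endian bit lists.
    out, carry = [], 0
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 3 ([1,1] little-endian) + 1 ([1,0]) = 4 ([0,0,1])
assert add_bits([1, 1], [1, 0]) == [0, 0, 1]
```

A single `a + b` in source becomes five gate applications per bit, which is exactly the bulk the EzPC quote complains about.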
@theruran Remember our discussion about Abstract State Machines: while at first sight they were interesting, we concluded that the introduction of a VM ruined most of the benefits in terms of proven execution, because the machine could be compromised.
I would argue the same here: instead of sending an AND/OR gate schema, we would send a kind of VHDL code, compiled and assembled into AND/OR gates on the remote device. But if such a compiler / assembler is
@theruran compromised on the remote machine, what can we do then?
• Ensure these compilers / assemblers cannot be compromised.
For this, we can use redundancy strategies: choose two remote devices randomly, send the same code with the same data set, and we should obtain the same output; then implement a mechanism that checks the outputs are the same before validating the result. It is not that hard to do, since we would be in a fully synchronous paradigm.
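The redundancy strategy above can be sketched as follows (devices are simulated as local functions; a real system would dispatch the code and data over the network):

```python
import random

def honest_device(code, data):
    return code(data)                 # runs the code faithfully

def compromised_device(code, data):
    return code(data) ^ 1             # tampers with the result

def redundant_execute(devices, code, data):
    # Pick two devices at random, run the same code on the same data,
    # and validate the result only if the outputs agree.
    a, b = random.sample(devices, 2)
    ra, rb = a(code, data), b(code, data)
    if ra != rb:
        raise RuntimeError("outputs differ: at least one device is compromised")
    return ra

parity = lambda n: bin(n).count("1") % 2

# All-honest pool: the result is validated.
assert redundant_execute([honest_device] * 3, parity, 0b1011) == 1

# One compromised device in a pool of two: the disagreement is caught.
try:
    redundant_execute([honest_device, compromised_device], parity, 0b1011)
except RuntimeError:
    pass
```

Note the sketch only detects disagreement; deciding which of the two devices lied would need a third device or a majority vote.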
@theruran @yaaps In order to ensure an integrated circuit's integrity, typically an FPGA but not only, redundancy (sending the same bitfile to two random remote FPGAs) is by far the simplest strategy we have at hand today, until we invent something better, which is a topic I am researching too.
Two years ago, during the French conference organized by LibreSilicon, I was surprised that no research project was dealing with post-manufacturing full integrity check
@theruran @yaaps issues. We were presented with many interesting, useful, and very promising research projects, mainly focused on free toolchains, but nothing to ensure IC post-manufacturing FULL integrity, or "on site" full integrity checking by end users.
To me this is an essential matter that must be addressed.
When one knows about the NSA's TAO program, and their ability to intercept any parcel, change its contents, and then silently reinject it into a postal or logistics
@theruran @yaaps operator's stream, this means that even if you have a fab making your own chips, and you personally check their integrity directly in the fab itself by analyzing a few random samples, you have no guarantee that these ICs will not be replaced by backdoored ones when the fab sends them to you via logistics operators.
I have been researching several ways to implement on-site full integrity checks, but also strategies like redundancy that can somehow, under
How do you know you own the machine?
If you compile an AST to gates, that's a minute, hyper-detailed accounting of the implementation. If you represent AND gates and OR gates as ones and zeros, the final representation is unlikely to compress more efficiently than a source-code representation. So, yes: bulky, high-bandwidth, and inefficient. But...
You might still want that for a bootstrap.
@stman @yaaps It's a more difficult issue because you are talking about compilers, but the research I referenced above describes how to ensure the remote machine is executing your program according to formal semantics. The benefit of sending HDL instead of TTL would be less information on the wire.
I don't understand what you say here about Abstract State Machines. It's another abstract machine that is represented mathematically (formal semantics). As with any abstract machine, we can theoretically implement it in hardware just as they did with the LISP microprocessor. The output and stepwise process of an ASM can be tested on many different machines using different software, thereby using the principle of redundancy to verify correctness. It can even be implemented as an FPGA softcore (again theoretically, I don't really know about the space requirements of typical ASMs versus what's available on a run-of-the-mill FPGA).
Unison uses a hash of the AST as an identifier, which is at right angles to the OCaps concept of an unguessable reference. You can use both together, as long as the lookup to execute the content of the AST is done in a local namespace.
Of course, that also requires a common syntax for the AST.
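A minimal sketch of this content-addressing idea, assuming an invented JSON serialization as the "common syntax" (Unison's actual hashing scheme differs): the identifier is a hash of the canonical AST, and execution goes through a lookup in a local namespace keyed by that hash.

```python
import hashlib
import json

def ast_hash(ast) -> str:
    # Canonical serialization so structurally equal ASTs hash identically.
    canonical = json.dumps(ast, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

namespace = {}  # local namespace: hash -> executable definition

def publish(ast, impl):
    h = ast_hash(ast)
    namespace[h] = impl
    return h

def run(h, *args):
    # The lookup stays local, as described above.
    return namespace[h](*args)

double_ast = ["lambda", "x", ["mul", "x", 2]]
ref = publish(double_ast, lambda x: x * 2)
assert run(ref, 21) == 42
# Any structural change to the AST yields a different identifier:
assert ast_hash(["lambda", "x", ["mul", "x", 3]]) != ref
```

The hash names the definition; keeping the hash-to-code lookup local is what lets it coexist with unguessable OCaps references.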