tinyzimmer

joined 1 year ago
[–] tinyzimmer@sh.itjust.works 1 points 1 year ago

Almost certainly. At its core, everything happening here could be accomplished with regular configuration files; it's basically a suite for maintaining that state.
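To make that concrete: the end result on each node is just an ordinary WireGuard device with ordinary peer entries. A rough sketch of the kind of thing being automated, using the standard wgctrl library (the interface name, addresses, and endpoint here are made up, and the real thing manages much more than a single peer):

```go
package main

import (
	"log"
	"net"
	"time"

	"golang.zx2c4.com/wireguard/wgctrl"
	"golang.zx2c4.com/wireguard/wgctrl/wgtypes"
)

func main() {
	client, err := wgctrl.New()
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Stand-in for the peer's real public key.
	priv, err := wgtypes.GeneratePrivateKey()
	if err != nil {
		log.Fatal(err)
	}
	peerKey := priv.PublicKey()

	_, allowed, _ := net.ParseCIDR("10.10.0.2/32") // made-up mesh address
	keepalive := 25 * time.Second

	// Everything the mesh tracks ultimately boils down to peer entries
	// like this on a plain WireGuard interface (assumes "wg0" exists).
	err = client.ConfigureDevice("wg0", wgtypes.Config{
		Peers: []wgtypes.PeerConfig{{
			PublicKey:                   peerKey,
			Endpoint:                    &net.UDPAddr{IP: net.ParseIP("203.0.113.10"), Port: 51820},
			AllowedIPs:                  []net.IPNet{*allowed},
			PersistentKeepaliveInterval: &keepalive,
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```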

I was considering adding FRR or BGP to the mix at some point - but it hasn't proven necessary yet.

[–] tinyzimmer@sh.itjust.works 1 points 1 year ago

Maybe? Quite possibly, it seems. I'm not too familiar with it.

[–] tinyzimmer@sh.itjust.works 11 points 1 year ago* (last edited 1 year ago)

It's extremely similar to Tailscale, and they are a major source of inspiration for a lot of the functionality.

The main difference is that I'm using a controller-less setup where each node maintains the state of the mesh via Raft consensus. If the node currently acting as leader goes down, another node will pick up the leader responsibilities. When requests come in that need to mutate network state, nodes automatically forward the request to the leader node for you.

So kinda like Tailscale, but where you can disconnect and branch off at any time. Think... federated networks.
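If it helps to picture the "forward to the leader" part, it's roughly this shape. A minimal sketch on top of hashicorp/raft (a common Go choice for this, used here for illustration; the forwardToLeader stub is a stand-in, not Webmesh's actual API):

```go
package mesh

import (
	"fmt"
	"log"
	"time"

	"github.com/hashicorp/raft"
)

// proposeChange applies a network-state mutation. If this node is the
// leader, the change goes straight into the Raft log; otherwise it is
// handed off to whoever is currently leading.
func proposeChange(r *raft.Raft, change []byte) error {
	if r.State() == raft.Leader {
		// Replicate the change to the rest of the mesh.
		return r.Apply(change, 5*time.Second).Error()
	}

	leader := r.Leader() // address of the current leader, if any
	if leader == "" {
		return fmt.Errorf("no leader elected yet")
	}
	return forwardToLeader(string(leader), change)
}

// forwardToLeader is a placeholder for an RPC to the leader node, which
// would then call Apply itself. Left as a stub in this sketch.
func forwardToLeader(addr string, change []byte) error {
	log.Printf("forwarding %d-byte change to leader at %s", len(change), addr)
	return nil
}
```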

 

Hey all!

Dropping my Webmesh project (https://github.com/webmeshproj/webmesh) again, as I've just reached a major milestone on the way towards making it a viable product. Webmesh is yet another pass at creating a distributed service/application mesh/VPN using WireGuard. More infoz is on the project website: https://webmeshproj.github.io/

With the new "mesh bridge" capabilities, you can run a bridge node between two or more meshes that serves to forward appropriate traffic between them. It also offers DNS forwarding capabilities to lookup internal names across meshes. This is accomplished by running two or more IPv6 only wireguard interfaces connected to each mesh and sharing routes between them. IPv4 support is planned, but honestly may not even be necessary. You can see a reference example/playground here: https://github.com/webmeshproj/webmesh/tree/main/examples/mesh-to-mesh

Excited for your feedback :)

[–] tinyzimmer@sh.itjust.works 3 points 1 year ago* (last edited 1 year ago)

So yes, it will work with clients behind NATs. By default, the network is a little different from similar solutions in that not everyone is directly connected peer-to-peer. The default behavior is to branch off from the server you joined, with traffic to the rest of the network routed through it. Then, via the admin API (or configuration/RBAC that needs to be better documented), you can tweak the topology by putting "edges" between devices. If there is no direct connectivity between the devices, they will use ICE tunnels to connect. One of the APIs that can be exposed on nodes helps with candidate negotiation, and another can act as a TURN server if you want. It's sorta demonstrated here https://github.com/webmeshproj/webmesh/tree/main/examples/direct-peerings, but it's a contrived test because it all happens on Docker networks.
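For anyone unfamiliar with ICE, the candidate negotiation is roughly this dance. A bare-bones sketch with the pion/ice library (a common Go choice, used here purely for illustration rather than as Webmesh's internals), leaving out how candidates and credentials actually get exchanged between the peers:

```go
package main

import (
	"context"
	"log"

	"github.com/pion/ice/v2"
)

func main() {
	// Each side gathers candidates (host, STUN/TURN, etc.) and trades
	// them with the other peer over some control channel.
	agent, err := ice.NewAgent(&ice.AgentConfig{
		NetworkTypes: []ice.NetworkType{ice.NetworkTypeUDP4, ice.NetworkTypeUDP6},
	})
	if err != nil {
		log.Fatal(err)
	}

	if err := agent.OnCandidate(func(c ice.Candidate) {
		if c == nil {
			return // gathering finished
		}
		// Send c.Marshal() to the remote peer via whatever signaling you have.
		log.Println("local candidate:", c.Marshal())
	}); err != nil {
		log.Fatal(err)
	}
	if err := agent.GatherCandidates(); err != nil {
		log.Fatal(err)
	}

	// Credentials also have to reach the other side out of band.
	ufrag, pwd, _ := agent.GetLocalUserCredentials()
	log.Println("local ufrag/pwd:", ufrag, pwd)

	// With the remote side's candidates added via agent.AddRemoteCandidate,
	// one peer dials and the other accepts; the result is a net.Conn the
	// tunnel can run over even when both ends sit behind NAT. The remote
	// credentials below are placeholders.
	conn, err := agent.Dial(context.Background(), "remote-ufrag", "remote-pwd")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```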

To your second question: currently there has to be someone accessible. But I've included the idea of a Peer Discovery API that devices can optionally expose. In that vein, you could have a node that just provides peer discovery and nothing else.

It's kinda pointless though, because the server running the API has to already be a member of the cluster - so in that way it becomes a "central server". I want to add more options, such as SRV lookups. Always happy for help and more ideas too :)
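To illustrate what I mean by SRV lookups: a joining node could resolve an SRV record to discover bootstrap peers instead of being handed an address up front. Something like this, with a hypothetical service name and domain:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Look up e.g. _webmesh._tcp.example.com to discover bootstrap peers.
	// The service name and domain here are made up.
	_, addrs, err := net.LookupSRV("webmesh", "tcp", "example.com")
	if err != nil {
		log.Fatal(err)
	}
	for _, srv := range addrs {
		fmt.Printf("candidate peer: %s:%d\n", srv.Target, srv.Port)
	}
}
```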

 

Hey all

I wanted to show off my new project, Webmesh. It's yet another solution for creating WireGuard mesh networks/VPNs between multiple hosts, most similar to projects like Tailscale/ZeroTier. It differs from the others in its controller-less architecture, which maintains the network state on every node via Raft consensus. This allows any node to become the "leader" should the current one go away.

Github in the link above. More infoz in the README and on the project website: https://webmeshproj.github.io

Excited to hear any feedback :)

[–] tinyzimmer@sh.itjust.works 1 points 1 year ago* (last edited 1 year ago)

Hehe I'll respond to the edit.

I actually have a lot of respect for what Tailscale is doing. 99% of their shit is open source, and they don't get in the way of the downstream Headscale project that lets you run your own controllers. That being said, I think it gets pricey at scale and tries to do too much for the user. Extending it isn't super easy at the moment either, but they are working on ways of embedding their agents.

I wanted to take the idea and put it on the same level as distributed internet projects like Reticulum. I think this has potential to be the networking base for a concept similar to "dApps", but without the financial incentives that come with using a blockchain.

That all being said - I'm totally considering making a managed offering of this, and am actively looking for people who'd be interested in going on that journey with me. But I'd try extremely hard to never be labeled "corporate" :P.

[–] tinyzimmer@sh.itjust.works 2 points 1 year ago

I am by no means an expert, but the TL;DR is that Raft is a consensus protocol that allows a distributed system to maintain a shared, replicated state. The GitHub page on it is pretty good - https://raft.github.io/.

What it means for this project is that every single node keeps the database containing the entire network state (rules, addresses, routes, etc.) in memory. At any point in time, any of the "voting" nodes can become the "leader". The leader is responsible for authorizing nodes to join, mutating state, etc. If the leader goes away, another node picks up the slack.
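If you want to picture how that in-memory database hangs off Raft: each node implements a state machine that committed log entries get applied to, so every node converges on the same data. A stripped-down sketch of that shape using hashicorp/raft's FSM interface (the actual Webmesh schema is much richer than this toy peer map):

```go
package mesh

import (
	"encoding/json"
	"io"
	"sync"

	"github.com/hashicorp/raft"
)

// networkState is the in-memory copy of the mesh database that every
// node carries: peers, addresses, routes, ACLs, and so on (simplified
// here to a single map).
type networkState struct {
	mu    sync.RWMutex
	peers map[string]string // node ID -> mesh address
}

func newNetworkState() *networkState {
	return &networkState{peers: make(map[string]string)}
}

// Apply is called on every node, in the same order, for each committed
// log entry, which is what keeps all copies of the state identical.
func (s *networkState) Apply(l *raft.Log) interface{} {
	var change struct {
		NodeID, Address string
	}
	if err := json.Unmarshal(l.Data, &change); err != nil {
		return err
	}
	s.mu.Lock()
	defer s.mu.Unlock()
	s.peers[change.NodeID] = change.Address
	return nil
}

// Snapshot and Restore let a node catch up from a point-in-time copy
// instead of replaying the whole log. Stubbed out in this sketch.
func (s *networkState) Snapshot() (raft.FSMSnapshot, error) { return nil, nil }
func (s *networkState) Restore(rc io.ReadCloser) error      { return rc.Close() }
```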

 

Hiya Folks

Making the rounds again on this project as it is getting closer to being feature-complete (ish) and I've started this website for extended infoz/documentation. Main repository can be found here: https://github.com/webmeshproj/node.

The project aims to be yet another simple WireGuard mesh/VPN solution - most similar to Tailscale/Headscale, but with a controller-less architecture governed by Raft consensus.

I'm excited to hear any feedback. Contributions are welcome as well :). Anything from architecture discussion, to issues, to code, to docs is appreciated.

[–] tinyzimmer@sh.itjust.works 2 points 1 year ago (1 children)

Samesies. Using three monitors on KDE for about 2 years now with no issues.

[–] tinyzimmer@sh.itjust.works 2 points 1 year ago

Thanks for the kind words :)

 

Been slowly stabilizing my new project, which I'm calling (for lack of a better name at the moment) Webmesh, and making the rounds on some social networks.

It's yet another mechanism for configuring WireGuard networks, with the core difference from others being that it uses Raft consensus to maintain state between connected instances. This allows any node to optionally be a "controller server" on top of being a spoke in the network. There is more information on GitHub - but tons more documentation is needed for sure.

I'm building a plugin architecture around it, as well as potentially looking to build a SaaS or some other type of offering. I've been actively looking for another developer or two who may be interested in embarking on the adventure with me. Feel free to reach out if interested. Also happy to hear any feedback :)