massysett a day ago | next |

I’m an old-timer, and I’m surprised that paying for shared hosting is now “self-hosting.” Nothing wrong with that, but it would never have been called self-hosting ten years ago.

I guess it’s like how “cooking from scratch” evolved. A cookbook from the nineteenth century might have said “1 hog” as an ingredient and instructed you to slaughter it. Now of course you buy hog pieces on foam trays.

apitman a day ago | root | parent | next |

There are other issues with the terminology as well. The self hosting community (centered at /r/selfhosted) has a very technical vibe. These people enjoy tinkering with computers. They're like kit car builders.

But there's a whole market of people who could benefit from self hosting, but shouldn't be required to understand all the details.

For example, you can get many of these benefits by using a managed service with your own domain. Things like data ownership, open source software, provider competition, etc.

I think we need a broader term. I've been using "indie hosting" lately.

simcop2387 a day ago | root | parent | next |

This is the kind of thing I've been watching unfold with some home "NAS" boxes over the past couple of years. It started much earlier, but it's become more of a differentiating factor in some product lines lately because the NAS side of things is basically a solved problem for 99% of people. So the manufacturers (Synology, QNAP, TerraMaster, UGREEN, etc.) have been adding what looks a lot like turn-key installation of things like Nextcloud, Plex, and a bunch of other services that the self-hosting community has been talking about for years.

I think one of the big drivers has been the serious increase in performance and capability of low-power embedded processors from Intel and AMD (and, in the last year or so, some ARM-based ones): supporting more than 2 GB of RAM and having multiple cores that can do meaningful work even within a 15 W TDP.

Dalewyn a day ago | root | parent |

I am of the impression that Synology is pivoting away.

>Starting from this version, the processing of media files using HEVC (H.265), AVC (H.264), and VC-1 codecs will be transitioned from the server to end devices to reduce unnecessary resource usage on the system and enhance system efficiency.

https://www.synology.com/en-us/releaseNote/DSM

They say it's to "reduce unnecessary resource usage" and "enhance efficiency", I say it's the start of a race to the bottom of the barrel now that the market is saturated and BOMs start weighing heavier.

apitman a day ago | root | parent |

If my device supports the native format of the content, I definitely want it decoded there rather than transcoding on the server. Assuming said format isn't significantly more power hungry than the transcoded codec.

Arn_Thor 13 hours ago | root | parent |

Sure. But they’re not giving the user a choice in the matter. Also, it’s transcoded on the device as it’s backing up, which is the last thing I want to spend battery power on. My NAS is on and plugged into the wall 24/7 for a reason

immibis a day ago | root | parent | prev |

I call it self-hosting when it's on your own server, and "hosting at home" if we want to be specific that the server is at home.

apitman a day ago | root | parent |

What would you call it if it's hosted on someone else's server, but using open source software under your domain, and you have a complete backup of all the data so you could move it home or to another provider whenever you want?

transpute a day ago | root | parent | next |

For the last 30 years, that's been called "web hosting" [1]:

  Shared custody
  No confidentiality
  Portable domain identity
[1] https://www.webhostingtalk.com

apitman a day ago | root | parent |

If someone thinks to themselves: "I really don't like the ways twitter is changing. I'm leaving, but is there anything I can do to avoid the same thing happening with some other app/company?"

If they search around for an answer to that question, pretty soon someone is going to tell them to "self-host a Mastodon instance" or in the near future "self-host an ATProto instance".

My point is that the term "self-hosting" is unlikely to get them what they want, unless they happen to be interested in learning about DNS, IP addresses, ports, port forwarding, routers, firewalls, NAT, CGNAT, TLS, TCP, HTTP, web servers, Linux, updates, backups, etc, etc.

I don't think "web hosting" is going to help them much either.

What most people want is something like a Mastodon instance from masto.host[0] that integrates with a service like TakingNames[1] (which I own) to delegate DNS with OAuth2. I think we need a new term for this sort of setup. I think the term should also include self-hosting solutions, as long as those solutions focus on the outcomes (having a car to drive), not the implementation (building a kit car).

[0]: https://masto.host/

[1]: https://takingnames.io/blog/introducing-takingnames-io

johnklos 17 hours ago | root | parent | next |

I see both sides. While "self hosting" has always meant hosting yourself, and hosting on other people's systems isn't hosting something yourself, I can see how people can get confused and can call running their self-configured software on a rented VM "self hosting".

It's not as unambiguously incorrect as other silly things people say and do that are technically incorrect, but it is annoying when people don't provide enough context where it matters.

Honestly, the distinction only really matters when discussing privacy. Hosting your own stuff in a rented VM is still self hosting, but if you're talking about how you self host because you care about the security of your data, you're now definitely not talking about rented VMs.

Generally, I think we need to get used to the idea that "self hosting" now also refers to hosting software you configure on rented systems / VMs.

transpute 21 hours ago | root | parent | prev |

> don't like the ways twitter is changing. I'm leaving

Has there been work to quantify relative network effects in Twitter vs Mastodon, either generally or in specific communities? e.g. if person A was following N people on Twitter (e.g. in a list), what subset or superset of N could be followed on Mastodon?

If a user requested all their data from Twitter, including people being followed, is there tooling to map user identity/handles from Twitter to member names on decentralized alternatives?

> someone is going to tell them to "self-host a Mastodon instance.. from masto.host

Wouldn't that be masto-hosted rather than self-hosted?

In that scenario, Masto.host would be a trusted custodian of a social media identity, somewhat like a bank.

kayson a day ago | root | parent | prev | next |

There are definitely plenty of people who would say that using a hosting provider doesn't count, even if you're deploying the software yourself.

The one generally accepted exception to this is network protection. You don't want to expose your home IP address to the outside world if you can help it, so a lot of people use Tailscale, Cloudflare Tunnels, or a VPS as a proxy.

Cyph0n a day ago | root | parent | next |

A VPS that proxies traffic over Tailscale is another neat option. I use this approach to serve self-hosted services that I want to be accessible over the internet.

bauruine a day ago | root | parent |

Why use Tailscale if you can just set up a WireGuard tunnel?

freedomben a day ago | root | parent | next |

Tailscale is far, far less work to set up and maintain. Not to use a cliche, but it reminds me of Dropbox vs. rsync.

If you know Wireguard well enough to set up your own and you're willing, you'll have a lot more control and less dependency, which is a win IMHO. But if you are limited by time and/or knowledge, Tailscale is great
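For reference, the "set up your own WireGuard" option being compared here is roughly one small config file per peer, something like this (all keys, hostnames, and addresses below are placeholders, not anyone's actual setup):

```ini
# /etc/wireguard/wg0.conf on the home server (placeholders throughout)
[Interface]
Address = 10.0.0.2/24                 # tunnel-internal address
PrivateKey = <home-server-private-key>
ListenPort = 51820

[Peer]
# the VPS / remote peer
PublicKey = <vps-public-key>
AllowedIPs = 10.0.0.1/32              # only route the peer's tunnel IP
Endpoint = vps.example.com:51820
PersistentKeepalive = 25              # keep NAT mappings alive from behind NAT
```

Bring it up with `wg-quick up wg0`; the VPS side mirrors this with the keys and `AllowedIPs` swapped. The maintenance Tailscale saves you is mostly generating/distributing those keys and updating endpoints as peers move.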

darkwater a day ago | root | parent | next |

Aren't we talking about self-hosting, tinkering with your software for fun and hobby instead of going the SaaS way? Arguing about WG instead of TS in this context is perfectly fine

freedomben a day ago | root | parent | next |

Indeed, if you got the impression from my comment that I didn't think a debate on WG vs. TS was fine, then I apologize. I think it's a great (and important) thing to debate. My opinion is as stated. I think it's a different cost-benefit analysis for each person depending on time and/or knowledge.

darkwater a day ago | root | parent |

Don't worry!

Staying on topic, I wonder how easy/complicated it is to self-host Headscale, the open-source implementation of the Tailscale server.

al_borland a day ago | root | parent | prev | next |

Some people want the control without it becoming a full time hobby.

I wanted a NAS. I could do it with Linux and ZFS, rolling my own with full control. However, I didn’t want to sink that much time into it, and figured when something needed to be done, I would have forgotten so much I’d need to relearn over and over again.

Instead I went with a Synology. I get my NAS, I’m in control of my data, I can run some stuff with Docker on it… but I don’t really have to spend any time playing sys admin on my weekends.

pxc 18 hours ago | root | parent | prev |

Wireguard is very easy to set up imo.

Tailscale adds a lot of conveniences on top of Wireguard, though. I don't think most of their value comes from just eliminating the key management stuff from Wireguard setup.

homebrewer 20 hours ago | root | parent | prev | next |

Because they have good PR. Mesh networks are a dime a dozen, some of them have existed for decades and do not even rely on a central server (see tinc for an example).

There are more lightweight projects that rely on native kernel mode wireguard (thus giving fantastic performance) and only simplify key setup, without the need for persistent daemons that have had their own high severity CVEs. If you're asking this question, you might be better served by something like innernet (again, there are tons of alternatives).

There are more alternatives that are fully open and self hostable (including all server components), have support for the native kernel module, while having the same feature set as Tailscale (like netbird, but it's not the only one).

But TS is an HN darling because their devs have a presence here, some of them very well known and highly visible, and the company places lots of advertisements in podcasts and such.

Saris a day ago | root | parent | prev | next |

Just ease of use mostly, Tailscale works even behind CGNAT and automatically manages things for you.

apitman a day ago | root | parent |

I think you're unlikely to have a very good experience with Tailscale behind CGNAT if you're doing anything high bandwidth like video streaming from a Plex/Jellyfin server.

AFAIK Tailscale only supports 2 modes of connection: direct connect or relayed over WebSockets with their DERP protocol. CGNAT is going to limit you to DERP, which is not designed for transmitting a lot of data. For one thing, that could get rather expensive for Tailscale.

Saris a day ago | root | parent |

Oh yeah it's not going to be very fast, but for general usage that doesn't involve large transfers it's fine.

icedchai a day ago | root | parent | prev | next |

I have a VPS configured for BGP peering, using my own ASN, tunneling an IPv4 block and a couple of IPv6 blocks back to my home network over a wireguard tunnel. These wind up on their own VLANs, exposing a few VMs directly to the Internet.

It took a bit of time to set this up (and I fortunately had the V4 block already registered from back in the 90's.) I also had experience with BGP from previous jobs at early ISPs, which helped. Proxying is easier.

ang_cire a day ago | root | parent | prev |

In my case I am just interested in the software I'm running behind the proxy. I use CF tunnels to expose my internal services, and spend my tinkering time on the actual services rather than (to me) wasting it worrying about updating IPs or setting up custom auth schemes. I keep a lot of my services locked down entirely behind GitHub SSO, so you can't even reach my (e.g.) Jellyfin login page without first being auth'd to GitHub as me, which basically prevents all brute-force attempts on my services.

_heimdall a day ago | root | parent | prev | next |

I must be becoming an old-timer too; I only really consider it self-hosting if it's on my own hardware.

In case that doesn't make me an old timer, I also actually have pork and home cured bacon in the freezer from hogs we raised and processed. "An old soul living in a new world" feels pretty fitting here.

chris_armstrong 18 hours ago | root | parent | prev | next |

It's different degrees of the same thing I would have thought.

If you're running your own box, you still depend on network infrastructure and uplink of a service provider, whereas a cloud infrastructure provider may go the other way and negotiate direct connections themselves.

Plenty of valuable lessons await those who even just provision a virtual host inside AWS and configure the operating system and its software themselves. Other lessons await those who rack up their own servers and VNETs and install them at a data-centre provider instead of running them onsite.

There's only so much you can or should or want to do yourself, and it's about finding the combination and degree that works for you and your goals.

bankcust08385 a day ago | root | parent | prev | next |

Renting space in a colo and running eBGP on leased dark fiber to HE is real self-hosting. VPSes, while more convenient, are definitely nothing like running metal.

For a lot of stuff that doesn't need constant public network connectivity, I choose to run a home lab.

ang_cire a day ago | root | parent | prev | next |

If you are using a hosting provider, you are by definition not "self hosting", since you are in fact, not hosting (unless you happen to own the hosting provider company).

I actually self-host tools, and that involves having (in my case) a couple of rackmount servers in my spare bathroom, and an rPi5 with a 4x m.2 hat on my desk. Hell, even just running stuff on your own desktop/laptop is self-hosting.

But PaaS and SaaS are just as not-self-hosted as IaaS is. It's literally cloud hosting.

drpixie 20 hours ago | root | parent | next |

Yup - as they say, "the cloud" is just someone else's computer.

It's not so hard to genuinely self-host. You just need a reasonable ISP who is willing to open your connection, and to be sensible about securing your systems.

mervz 15 hours ago | root | parent | prev |

What if I rent physical server space at a colocation? Is that "self-hosted" by your definition? I'm curious where the imaginary line is drawn.

ang_cire 14 hours ago | root | parent |

So in other words, the colo data center is hosting the server for you, rather than you hosting it yourself?

This isn't hard, but sure, just pretend words are all nebulous and ineffable and "imaginary".

FartinMowler 7 hours ago | root | parent |

Bah, humbug! Hosting at home on a server purchased from a vendor like Dell? That's not true self-hosting either. A true Scotsman self-hosts on hardware he soldered up himself. /s

diggan a day ago | root | parent | prev | next |

Seems to me the term "self-hosting" tends to auto-adjust its position based on the other end. So if "not self-hosting" is hosting on a shared VPS, then self-hosting is hosting on a computer at home. But "not self-hosting" has now become "hosted in cloud" so self-hosting moved to "shared VPS" instead, as the other end moved.

Kind of makes sense, but it also makes historical texts more difficult to understand. In the year 2124, who knows what "self-hosting" meant in 2054? I guess it's up to future software archeologists to figure out.

icedchai a day ago | root | parent |

Yes, the goalposts move. When I started w/the internet 30+ years ago, self hosting meant your own 56K leased line.

dylan604 a day ago | root | parent | prev | next |

Hosting from home was always subject to home ISP ToS limits on doing that very thing. When I self-hosted in the early days, it was still paying someone to mount my system in their rack and use their network. So whether that was hardware that I rented from them, built the box myself, or using a VM they provide, it's still the same amount of work to maintain it. That's still different from using Wix/Squarespace, geocities, or using a social media platform.

ang_cire a day ago | root | parent | next |

> was always subject to home ISP ToS limits on doing that very thing

Every ISP prohibition on self-hosting that I have seen specifies commercial use, not just hosting services (since obviously that could technically prohibit tons of normal and authorized uses like co-op games).

koziserek 9 hours ago | root | parent | prev | next |

Indeed, nowadays killing an animal is hidden behind an industrial veil.

Unfortunately, it has also normalized and further desensitized us to the topic.

Though I'm quite happy to see that eating sentient beings is going out of fashion, at least in the developed world.

hk1337 a day ago | root | parent | prev | next |

I'm not sure how your classification of "old-timer" compares to mine, but as Gen X I would think of myself as one.

I feel like there's another term for what you're thinking of but I cannot come up with what it is.

Self-hosting definitely meant locally hosted on your own hardware back before hosting providers like Linode, DigitalOcean, AWS, etc. existed or were as customizable as they are now.

Even corporations "self-host" GitHub Enterprise or GitLab when they set it up on AWS. Self-hosting just means you're not reliant on the creator of the application to host it for you and manage the server.

There are certainly advantages and disadvantages to self-hosting on your own hardware, as there are to using a hosting provider.

user01010 a day ago | root | parent |

Do you reckon you could even use GitHub to archive some small things? If GitHub or GitLab suffers, then some parts of the internet will also have problems, correct? Legitimately asking: is there any way for GitHub to go around searching for "no-code" content through countless private repositories?

hk1337 a day ago | root | parent |

I don't think Github could read your self-hosted Github instance unless there's some code in there that calls back home or provides home the ability to search code in your instance.

In the beginning, self-hosting was seen as completely local, partially because there were no good options for hosting on a server, so that's probably where it sort of became synonymous with hosting it in your home.

lelanthran a day ago | root | parent | prev | next |

> I’m an old-timer, I’m surprised that paying for shared hosting is now “self-hosting.” Nothing wrong with that, but that would never have been called self-hosting ten years ago.

Depends, maybe? Was the speaker talking about hardware or software 10 years ago?

Because, when I was given the 'self-hosting' option by some SaaS vendor, it meant that I could host it on whatever I wanted, independent of the vendor, whether that was a rack in my bedroom or a DO droplet.

When I was given the 'self-hosting' option by some computer vendor (Dell, HP, Sun, etc), it meant that I can put the unit into a rack in my bedroom.

Context was always key; in my mind, nothing has changed.

bityard a day ago | root | parent | prev | next |

I have always understood self-hosting to mean being in charge of your applications and data instead of delegating it to a company. An example might be, setting up Nextcloud instead of Dropbox. Or Taiga instead of Trello.

WHERE and HOW it is hosted, is less important to me. Because if you self-host your own tools, you can freely pick them up and move them to any hosting provider, a cloud provider, or a Raspberry Pi in your basement. Self-hosting FREES you from infra/vendor lock-in.

al_borland a day ago | root | parent |

But isn’t using a 3rd-party web host giving up some of that control? Hopefully a reputable hosting company won’t shut down at a moment’s notice, but it could. Or if they go down, you’re stuck sitting there waiting for them to come back online, with no access to your services.

Hosting from home has its own challenges, so I get why people would go to a hosting provider, but I do think some control is given up in the process.

bityard 3 minutes ago | root | parent | next |

It depends on what you want to control. As I stated, I want full control over my apps and data. I am more than happy to rent power, compute, storage and bandwidth from someone else. I ran the math and found that running my own server 24/7 at home would increase my electricity bill by more than what I currently pay for my VPSes.

I self-host my stuff on two different third-party providers. Partly because my residential internet is not suitable for self-hosting and partly because I trust the infra in a profit-motivated datacenter to have WAY more 9s of uptime than anything I could cobble together in my basement. This stuff helps run my life, it's not my hobby, nor something I want to spend more than the necessary amount of time managing.

If I wake up tomorrow and my providers have gone dark without any warning, I am back in action in just a few simple steps:

1. Purchase a new VPS or two

2. Run ansible playbooks

3. Restore data from backups
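Step 2 above could be as small as a playbook like this (the host group, role names, and layout here are hypothetical, not the parent's actual setup):

```yaml
# site.yml -- illustrative recovery playbook; role names are placeholders
- hosts: vps
  become: true
  roles:
    - base_hardening   # users, SSH keys, firewall
    - docker           # container runtime
    - apps             # redeploy the self-hosted services
# step 3 (restoring data) could be another role or a manual restic/borg restore
```

Run it with something like `ansible-playbook -i inventory site.yml` once the new VPS is in the inventory; the point is that the server itself is disposable and only the backups are precious.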

immibis a day ago | root | parent | prev |

You retain most of the control. You have actual laws protecting you from them snooping on your database. And if it goes down, you have a backup, right? So redeploy the backup onto any other provider, or at home.

al_borland 21 hours ago | root | parent | next |

Are those recent laws? It wasn’t a database, but many years ago I had a web host tell me to remove certain files from their servers or they would terminate my account. The stuff wasn’t publicly accessible, I just had it available for myself via FTP so I could get at them from a couple locations. So there was some snooping going on.

user01010 a day ago | root | parent | prev |

What happens if your SSDs, HDDs, discs, home devices, etc. stop working? Fire up torrents or go back to Usenet? Just asking - but you still have online backups, and they can't check your database, right?

hoosieree a day ago | root | parent | prev |

Yeah, my list of requirements for self-hosting starts with:

1. battery backup

That said, I'm not zealous about it. "Perfect is the enemy of good" and I like ecosystem diversity in general. Better to have a few dozen shared hosting providers than 2 or 3 monopolies.

asar a day ago | prev | next |

Love self-hosting and really got into it over the last couple of months. I run a bunch of services for my company now and also in my home lab. I use a Hetzner VPS and provision things either via ansible + docker compose files or via https://github.com/coollabsio/coolify/.

The awesome-selfhosted repository is also a great place to find projects to self-host but lacks some features for ease-of-use, which is why I've created a directory with some UX improvements on https://selfhostedworld.com. It has search, filters projects by stars, trending, date and also has a dark-mode.

user_7832 a day ago | root | parent | next |

Since you seem knowledgeable on this topic I'd like to ask - how risky is it to expose a computer on your network to the internet, if you're somewhat tech-savvy but not very familiar with networking? Is it relatively "safe" with modern tools and VMs or do you need to stay on top and (for eg) always ensure you're updating software weekly?

I've thought of setting up and running a server for a long time and finally have a spare laptop so I'm thinking of actually running a NAS at least.

packetlost a day ago | root | parent | next |

I've been doing it for about 13 years now with HTTP/s (80, 443), SSH (22), MOSH (lol idk), and IRC (6697) exposed to the internet. You don't need it, but something like fail2ban or crowdsec is a good idea. You will get spammed with attempts to break in using default passwords for commodity routers (Ubiquiti's `ubnt` is rather popular), but if you're up to date and take a few minor precautions it's not all that hard and/or dangerous. That being said, there are alternatives such as Tailscale that are strictly more secure but far less flexible. I've heard of people using Cloudflare tunnels as well, but I'd rather not rely on big players for stuff like that if I'm going through the effort to self host (and don't have any real risk of DDoS).
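Something like this minimal fail2ban jail covers the SSH brute-force noise (the thresholds here are just a reasonable starting point, not the parent's actual config):

```ini
# /etc/fail2ban/jail.local -- minimal sshd jail; tune thresholds to taste
[sshd]
enabled  = true
port     = ssh
maxretry = 5     # ban after 5 failures...
findtime = 10m   # ...within 10 minutes...
bantime  = 1h    # ...for an hour
```

Drop it in, restart fail2ban, and check it with `fail2ban-client status sshd`.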

I would try to set up automatic updates for critical security patches, or update about weekly. I know people who self-host and do it monthly and they seem fine too. Almost anything super scary vulnerability-wise is on the front page here for a while, so if you read regularly you'll probably see when a quick update is prudent. I personally use NixOS for all of my servers and have auto-updates configured to run daily.
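On NixOS, that daily auto-update can be a couple of lines in the system config (option names are from the stock NixOS module; the exact values here are an assumption, not the parent's config):

```nix
# configuration.nix fragment -- rebuild against the current channel every day
system.autoUpgrade = {
  enable = true;
  dates  = "daily";  # any systemd calendar expression works here
};
```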

An old laptop is exactly how I got started 13 years ago, they're great because they tend to be pretty power efficient and quiet too.

tasuki a day ago | root | parent | next |

My stuff is always out of date and hasn't gotten hacked yet.

I don't see why you'd want to run ssh on port 22. I run it on a different port and never get login attempts. Yes, if someone targeted me specifically of course they'd find out, but I guess that hasn't happened yet.

johndough a day ago | root | parent |

> I don't see why you'd want to run ssh on port 22.

I run ssh on port 22 because I like wasting the time of those script kiddies. Also I like to brag about half a million "hacker attacks" on my server per month.

scubbo a day ago | root | parent | prev | next |

> I've heard of people using Cloudflare tunnels as well...

As a Cloudflare Tunnels user who only recently discovered Tailscale - just go with Tailscale straight off the bat. It's magic, and smooth as butter.

packetlost a day ago | root | parent | next |

Tailscale Funnel [0] is limited to TLS-based protocols (maybe even just HTTPS) which is a non-starter for many cases.

[0]: https://tailscale.com/kb/1223/funnel

Larrikin a day ago | root | parent | prev |

Which cases? Tailscale has eliminated all the fears I had about self-hosting and I've been using it a ton. The only issue I've run into has been a single service (Withings) that uses a webhook to trigger updates for my sleep mat. Their server isn't on my tailnet, so I would need to expose at least one service to the wider internet.

packetlost a day ago | root | parent |

I'm talking specifically about Tailscale Funnel which gives ingress access to services on the tailnet from outside (ie. on the general internet). Any case that doesn't use TLS for a transport won't work. SSH being a notable one, but I can think of several others.

clvx a day ago | root | parent | prev |

On top of this, having IPv6 configured makes things harder to discover, though not impossible (as long as you don't use ${ipv6_subnet}::xxxx for your hosts). You can avoid NAT and just expose the nodes you need. Most ISPs assign a /56 or /64, which is a humongous number of IPs. It's nice if you are just using a flat virtual network in your home lab. The number of scanners I see on my subnet is nonexistent at the moment.
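Back-of-the-envelope numbers for why sweeping a /64 is impractical (the billion-probes-per-second scanner is an assumption for the sake of the estimate):

```python
# A /64 leaves 64 bits of host address space.
hosts_in_slash64 = 2 ** (128 - 64)
print(hosts_in_slash64)  # 18446744073709551616

# Even at a (generous) 1e9 probes/second, a full sweep takes centuries:
seconds = hosts_in_slash64 / 1e9
years = seconds / (3600 * 24 * 365)
print(round(years))  # 585
```

Which is why picking non-guessable host addresses (not `::1`, `::53`, etc.) meaningfully raises the bar, even though it is obscurity rather than security.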

conradklnspl a day ago | root | parent | prev | next |

A good option is to set up a WireGuard connection between workstation and servers. All traffic has to go through WireGuard.

Because WireGuard runs over UDP and only responds to cryptographically valid packets, there isn't any open port visible from the outside. Not even SSH.

jimvdv a day ago | root | parent |

Additionally, you can use Tailscale for added convenience. Tailscale is a paid service, but for a simple home server you can get away with the free plan, and their mobile apps work rather well.

Not affiliated with Tailscale at all, just shouting them out because they make things very easy and I often recommend them to hobbyists.

0xfeba a day ago | root | parent | prev | next |

I've been at it for over a decade. Home router has firewall exceptions for SSH (not port 22 though), TLS IRC, and 80/443, which are forwarded to my home server with fail2ban.

I run SSH (requires PKI outside local network), IRC, nextcloud, and ampache (though don't really use ampache anymore :( ).

Home server is encrypted RAID6 Arch Linux. If I had to do it again I'd forego rolling releases and use something more stable, like Debian.

Encrypted backups are done to backblaze once a month. I also have a backup drive that I plug in on occasion, encrypted of course.

Which reminds me my RAID6 drives are getting old now... I'm tempted to move to a VPS.

ang_cire a day ago | root | parent | prev | next |

It is very service-dependent. If you are wanting to run a NAS for e.g. a media server, you may want to look into Cloudflare Tunnels or Tailscale.

I set up Jellyfin and Kavita, and those are internet-exposed, but also Nextcloud, and Portainer, and Calibre, and those are behind github SSO auth, via Cloudflare. Basically, before you can hit the nextcloud login page, you have to auth to github (as me) with 2FA first, so no one can sit there and try to brute-force my nextcloud login.

diggan a day ago | root | parent | prev | next |

> Is it relatively "safe" with modern tools and VMs or do you need to stay on top and (for eg) always ensure you're updating software weekly?

First step is to figure out whether you actually need to access it from the outside at all. If you just want a NAS, chances are you can put it on a separate VLAN/network that is only reachable within your LAN, so it wouldn't be accessible from the outside at all.

If you really need it to be accessible from the outside, make sure you start with everything locked down/not accessible at all, then step by step open up exactly what you want, and nothing else. Make sure the accessible endpoints run software you keep up to date, at least weekly if not daily.
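As a concrete starting point for "locked down by default", a default-deny nftables ruleset that opens only HTTP/S looks roughly like this (a sketch, not a complete policy):

```
# /etc/nftables.conf -- drop inbound by default, open only what you mean to
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept   # replies to our own traffic
    iif lo accept                         # loopback
    tcp dport { 80, 443 } accept          # only the services you expose
    icmp type echo-request accept         # v4 ping
    icmpv6 type echo-request accept       # v6 ping
  }
}
```

Each new service then becomes a deliberate one-line addition rather than something exposed by accident.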

voidUpdate a day ago | root | parent | prev | next |

You'll want to make sure everything stays up to date in case someone finds a vulnerability in whatever software you're currently using. If you have to expose stuff to the outside world, only open the ports you need to. Only allow access to a specific user with a non-default username (or at the very least disable root ssh access), and use long passwords or ssh keys. I think that's generally the bare minimum, but there are online guides to harden your stuff further like using wireguard and fail2ban and stuff
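In sshd_config terms, that bare minimum looks something like this (the username is a placeholder; these are standard OpenSSH options):

```
# /etc/ssh/sshd_config excerpts -- keys only, no root, one allowed user
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers myuser        # placeholder: your non-default username
```

Test with a second terminal before closing your working session, so a typo can't lock you out.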

unethical_ban a day ago | root | parent | prev | next |

Keep things up to date and ideally, having your public facing servers in a DMZ/their own VLAN (separate network from your private stuff).

Administrative things like SSH and RDP are best accessed with a VPN but you can configure SSH in particular to be key-based authentication only, which is very secure.

the_gastropod a day ago | root | parent | prev |

I sat on the fence for a long time wanting to do this, and finally pulled the trigger and picked up a Synology NAS last year. I've had a blast setting up a handful of handy little self-hosted services on the thing. Highly recommend giving it a go!

I haven't had any security issues yet (knock on wood). But it seems pretty low-risk if you follow basic best practices. The only thing I have exposed to the internet is a reverse proxy that proxies to a handful of docker containers.

bongobingo1 a day ago | root | parent | prev |

Hm, is there a name for the type of software that Coolify is, where it presents a management plane for other servers, vs Dokku where it runs on the server?

b_shulha a day ago | root | parent |

Coolify and others mentioned on that website can run on the server itself as well.

Coolify happens to offer a paid option as a way to sponsor development, but it's not mandatory.

meonkeys a day ago | prev | next |

The author nails it here:

> It is 2024, and I say it is time we revisited some of the fundamental joys of setting up our own systems.

Self-hosting really is joyful. It's that combination of learning, challenge, and utility.

+1 to Actual Budget

+1 to Changedetection.io

-1 for not mentioning threat modeling / security. The author uses HTTPS but leaves their websites open to the public internet. First-timers should host LAN-only or lock their stuff way down. I guess that's tricky with shared hosting without some kind of IP restriction or tunneling, though. No idea if uberspace offers something like that.

For folks getting past the initial stages of self-hosting, I'd really recommend something like Docker to run more and more different apps side by side. Bundled dependencies FTW. Shameless plug for my book, which covers the Docker method: https://selfhostbook.com
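
A minimal Compose file for running two things side by side might look like this (the images are real, but the credentials and port choices here are placeholders, and Miniflux needs a few more env vars in practice):

```
# docker-compose.yml -- hypothetical two-service setup
services:
  miniflux:
    image: miniflux/miniflux:latest
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://miniflux:secret@db/miniflux?sslmode=disable
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=miniflux
      - POSTGRES_PASSWORD=secret
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

Then `docker compose up -d` brings the whole stack up, bundled dependencies and all.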

from-nibly a day ago | prev | next |

> Practically, it is foolishness, for what you save in money you lose in time and sanity.

Kubernetes gets a lot of side-eye in the self-hosting community, but that quote describes all of self-hosting. So why not go all in?

I've got 3 dell r720XDs running nixos with k3s in multi master mode. It runs rook/ceph for storage, and I've got like 12 hard drives in various sizes. My favorite party trick is yoinking a random hard drive out of the cluster while streaming videos. Does not care. Plug it back in and it's like nothing happened. I've still got tons of room and I keep finding useful things to host on it.

kayson a day ago | root | parent | next |

Plenty of people use k8s or k3s for self hosting. But for most, the added complexity doesn't buy enough for the trade-off to be worth it. Keep in mind most people have a single node, so docker does everything they need.

Personally, even with a 4 node setup (of tiny desktops; the hardware you have would easily cost me $200/mo in power bills), I use docker swarm. Old and unloved, but does everything I need for multi node deployment and orchestration with only a sliver more complexity than vanilla docker.
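
For what it's worth, the jump from plain Compose to Swarm is mostly a `deploy:` section in the same file; a sketch (the service name and image are placeholders):

```
# stack.yml -- hypothetical Swarm stack
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    deploy:
      replicas: 2               # spread across the nodes
      restart_policy:
        condition: on-failure
```

After `docker swarm init` on the first node and `docker swarm join` on the rest, `docker stack deploy -c stack.yml web` schedules it across the cluster.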

from-nibly a day ago | root | parent |

Yeah, don't ask me about my power bill; it's definitely in the vanity realm. I have cheap power where I live, so it's not anywhere near $200. Still too high, though. One day I'll get solar to offset it.

Cyph0n a day ago | root | parent | prev |

I just run NixOS in a VM and run services as containers directly. Self-plug: I wrote a tool that makes it easy to run Docker Compose projects on NixOS [1].

This way, I get the advantages of NixOS config, while also being able to run arbitrary applications that might not be available on nixpkgs.

As far as storage goes, I just use ZFS on the hypervisor (Proxmox) and expose that over NFS locally.

[1] https://github.com/aksiksi/compose2nix

arrty88 a day ago | prev | next |

I'm a big fan of self-hosting. I have learned a lot from a small hobby project.

For those curious about my setup: I bought a used Dell R630 on eBay for cheap (1TB RAID 1 on SSDs, 32GB RAM, 32 cores), and I'm enjoying running a few small hobby apps with docker, virsh, and minikube (yes, I learned all three). I have a 1Gbps fiber connection. A cronjob runs every minute to detect if my IP has changed, and I use the Linode API to update my DNS A records.
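
A sketch of that cronjob in Python (the Linode v4 endpoint is real, but the domain/record IDs, cache path, and token variable are placeholders):

```python
import json
import os
import urllib.request

CACHE = "/var/tmp/current_ip"  # hypothetical cache file for the last-seen IP

def needs_update(cached_ip, current_ip):
    """True when the public IP differs from the one we last recorded."""
    return current_ip != cached_ip

def fetch_public_ip():
    # Any "what is my IP" service works; ipify is a common choice.
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

def update_linode_record(domain_id, record_id, ip):
    # PUT /v4/domains/{domain_id}/records/{record_id} updates an existing A record.
    req = urllib.request.Request(
        f"https://api.linode.com/v4/domains/{domain_id}/records/{record_id}",
        data=json.dumps({"target": ip}).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['LINODE_TOKEN']}",
            "Content-Type": "application/json",
        },
        method="PUT",
    )
    urllib.request.urlopen(req)

def main():
    current = fetch_public_ip()
    cached = open(CACHE).read().strip() if os.path.exists(CACHE) else ""
    if needs_update(cached, current):
        update_linode_record(12345, 67890, current)  # placeholder IDs
        with open(CACHE, "w") as f:
            f.write(current)

# Call main() from a script invoked by cron, e.g.:
#   * * * * * /usr/bin/python3 /opt/ddns.py
```

Caching the last-seen IP keeps the cron job from hammering the DNS API every minute when nothing has changed.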

cutler a day ago | prev | next |

1.5GB RAM / 10GB disk? Hetzner's basic cloud VPS comes with 4GB RAM and a 40GB disk for €4.51.

tarruda 12 hours ago | root | parent | next |

AFAIK, nothing seems to beat Oracle Cloud: https://www.oracle.com/cloud/costestimator.html

For compute:

"Each tenancy gets the first 3,000 OCPU hours and 18,000 GB hours per month for free to create Ampere A1 Compute instances. This free-tier usage is shared across Bare Metal, Virtual Machine, and Container Instances."

For block storage:

"Block Volume service Free tier allowance is for entire tenancy, 200GB total/tenancy. If there are multiple quotes for the same tenancy, only total 200GB can be applied"

In other words: you get a 4-core ARM CPU + 24GB RAM + 200GB of storage for free.

sandos 12 hours ago | root | parent |

Yep, I was running one of these for the longest time... until they blocked idle instances! Hah, that's the kind of usage free gets you: a lot of people hoarding it for... nothing. I mean, I could easily have thought of stuff to load the instance up slightly, but, eh.

tarruda 10 hours ago | root | parent |

You can also add an extra 50GB of space and pay around $5/month; that way you're a paying customer, and it's still an insanely better deal than any of the other cloud providers.

RajT88 14 hours ago | prev | next |

I'm with the other old timers.

If it's not your hardware running in a space you own or rent, you're not self-hosting.

Currently I have a little Mini-ITX box. But once upon a time I had a proper server rack with 6U worth of servers, UPS, networking, etc. (Before I was married...)

crossroadsguy 19 hours ago | prev | next |

I loved the idea of PikaPods until I realised that even with 10 small (no, tiny) instances/services, just for me, used really, really rarely, I was getting into a cost of some whole number of USD x 11 (or whatever the number is). Can't blame them, because it costs money to run things. But I would have preferred something less costly, or something that doesn't go up in price with the number of services/apps used. I wish there were a cost-effective solution for this kind of self/web/app hosting.

chadsix a day ago | prev | next |

I am part of a company that promotes self-hosting and provides external routing for self-hosters [1].

We made Cloud Seeder [2], an open-source application that makes deploying and managing your self-hosted server a one-click affair!

Hope this comes in handy for someone! :-)

[1] https://ipv6.rs

[2] https://ipv6.rs/cloudseeder https://github.com/ipv6rslimited/cloudseeder

xiconfjs a day ago | root | parent | next |

From the FAQ:

* Q: "What about IPv4?"

* A: "While IPv4 is still widely used, its necessity is diminishing as the world transitions to IPv6. (...)"

;)

DanAtC a day ago | root | parent | prev |

I like the concept, but only 5 IPs? With IPv6 you should be offering at least a /64 per tunnel.

chadsix a day ago | root | parent |

Great point!

We offer 5 because we're geared toward helping people host appliances, as opposed to raw network setups! We also offer automatic rDNS with this, as well as the Cloud Seeder appliance!

Thanks again for your comments and thoughts!

johnklos 17 hours ago | prev | next |

It's a good writeup, but I do take exception to this:

"Seriously, else-hosting is the practical option, let someone else worry about the reliability, concurrency, redundancy and availability of your systems."

Spend some time trying to get through a maze of automated phone answering systems, then try to ascertain whether the human, when you finally reach one, even understands the issue, then wonder how much of what they're telling you is just to get you off the phone, all while wondering if calling even does anything, and you'll question whether it's better to blindly trust a company that likely doesn't let you talk to its tech people or to just do it yourself.

At least when there's an issue with my things, I can address it. Although a bit of a tangent, I'd love to see a review of major hosting providers based on whether you can talk to a human, and whether said human knows anything at all about Internet stuff.

leosanchez a day ago | prev | next |

Miniflux is very good. It even has a Telegram integration that will send you a notification whenever a new article is published.

loremm a day ago | root | parent |

In general, it's worth noting that Telegram bots are easy (and free) to make, and messages can be sent with one cURL command. Very useful; you can even set one up to fire after long terminal commands so you know to check back.
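
For reference, that one-liner looks roughly like this (sendMessage is the standard Bot API method; the token and chat id are placeholders you get from @BotFather and a getUpdates call):

```
curl -s "https://api.telegram.org/bot<TOKEN>/sendMessage" \
  -d chat_id=123456789 \
  -d text="backup finished"
```

Tack it onto the end of a long job (`some_long_job; curl ...`) and your phone buzzes when it's done.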

w10-1 a day ago | prev | next |

OK:

    This has not been a detailed step by step walkthrough 
    on how to do things, by design. You are meant to go and explore; 
    this is simply a way pointer to invigorate your curiosities

Sorry, but because I came looking for solutions, I found the invigoration aggravating at first, though it did help focus my attention.

Scalable services and sites I can build, 10 different ways.

My enduring, blocking need is for dead-simple idiot-proof network management to safely poke a head out on public IP from home. And to make secure peer-to-peer connections. Somehow that process never converges on a solution in O(available) time.

</complaining>

transpute a day ago | root | parent |

> dead-simple idiot-proof

Recent thread: https://news.ycombinator.com/item?id=41440855#41460999

> network management to safely poke a head out on public IP from home

For remote access to private services, would Tailscale/WireGuard be an option? Tailscale can even use an Apple TV as an exit node.
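
The WireGuard config for that kind of remote access is pleasantly small; a sketch of the client side (keys, addresses, and endpoint are placeholders):

```
# /etc/wireguard/wg0.conf on the roaming client
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = home.example.com:51820
AllowedIPs = 10.0.0.0/24       # route only the home subnet through the tunnel
PersistentKeepalive = 25       # keep NAT mappings alive
```

`wg-quick up wg0` brings the tunnel up; Tailscale automates essentially this, plus key exchange and NAT traversal.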

> secure peer-to-peer connections

Which protocols would you consider secure for P2P use, e.g. which solutions have you tried previously which failed to converge?