kamstrup

joined 1 year ago
[–] kamstrup@programming.dev 2 points 5 days ago

Interesting observation! The simplest explanation would be that it is memory claimed by the Go runtime during parsing of the incoming BSON from Mongo. You can try calling runtime.GC() 3 times after ingest and see if it changes your memory usage. Go does not free memory to the OS immediately, but this should do it.
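A minimal sketch of what I mean (the ingest() function is just a stand-in for your Mongo parsing step; debug.FreeOSMemory additionally asks the runtime to hand freed pages back to the OS right away):

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func ingest() { /* hypothetical: decode BSON from Mongo here */ }

func main() {
	ingest()

	// Force a few GC cycles so dead parser allocations are collected...
	for i := 0; i < 3; i++ {
		runtime.GC()
	}
	// ...and ask the runtime to return freed pages to the OS immediately.
	debug.FreeOSMemory()

	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	fmt.Println("heap in use:", ms.HeapInuse, "released to OS:", ms.HeapReleased)
}
```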

2 other options, a bit more speculative:

Go maps have been known to have a bit of overhead, in particular for small maps, even when calling make() with the correct capacity. That doesn't fit well with the memory profile you posted, though, as I didn't see any map container memory in there...

More probable might be that map keys are duplicated. So if you have 100 maps with the key "hello" you have 100 copies of the string "hello" in memory. Ideally all 100 maps qould share the same string instance. This often happens when parsing data from an incoming stream. You can either try to manually dedup the stringa, see if the mongo driver has the option, or use the new 'unique' package in Go 1.23
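A rough sketch of the dedup idea with Go 1.23's unique package (the loop simulating freshly parsed keys is made up for illustration):

```go
package main

import (
	"fmt"
	"unique"
)

// intern returns a canonical copy of s: all equal strings passed through
// intern end up sharing one backing instance.
func intern(s string) string {
	return unique.Make(s).Value()
}

func main() {
	docs := make([]map[string]any, 0, 100)
	for i := 0; i < 100; i++ {
		key := string([]byte("hello")) // simulates a key freshly allocated by a decoder
		docs = append(docs, map[string]any{intern(key): i})
	}
	fmt.Println(len(docs), "maps, but only one canonical copy of the key string")
}
```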

 

In the original proof of concept for ranging over functions, iter.Pull was implemented via goroutines and channels, which has a massive overhead.
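For reference, a minimal usage sketch of iter.Pull from the Go 1.23 iter package (the naturals iterator is just illustrative):

```go
package main

import (
	"fmt"
	"iter"
)

// naturals is a push-style iterator: it calls yield for 0..n-1.
func naturals(n int) iter.Seq[int] {
	return func(yield func(int) bool) {
		for i := 0; i < n; i++ {
			if !yield(i) {
				return
			}
		}
	}
}

func main() {
	// iter.Pull flips the push iterator into a pull-style next/stop pair.
	next, stop := iter.Pull(naturals(3))
	defer stop()
	for v, ok := next(); ok; v, ok = next() {
		fmt.Println(v)
	}
}
```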

When I dug in to see what the released code did, I was delighted to see that the Go devs implemented actual coroutines to power it, which is one of the only ways to get sensible performance from this.

Will the coro package be exposed as public API in the future? Here's to hoping ♥️

[–] kamstrup@programming.dev 4 points 3 months ago

There is manual memory management, so it seems closer to Zig

[–] kamstrup@programming.dev 1 points 3 months ago

There is a dangerously large population of devs and managers who look at themselves, unironically, as the gigachads pumping out UI "upgrades"

Many of these fail to realize how disruptive it is. UI change is like API breakage for the brain.

I have lost track of how many times I've tried to help an elderly family member with an app after some pointless, trivial UI change, only to have them give up on the app entirely after the "upgrade" because the cognitive overhead of the change is beyond the skill that can fairly be expected of them 💔

[–] kamstrup@programming.dev 4 points 4 months ago

The context package is such a big mistake. But at this point we just have to live with it and accept our fate because it's used everywhere

It adds boilerplate everywhere, is easily misused, can cause resource leaks, and has highly ambiguous connotations for methods that take a ctx: Does the function do IO? Is it cancellable? What are the transactional semantics if you cancel the context during method execution?

Almost all devs just blindly throw it around without thinking about these things

And don't get me started on all the ctx.Value() calls that traverse a linked list
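A small sketch of that last point: every context.WithValue wraps its parent, so a Value() lookup walks the chain node by node (the keys here are made up):

```go
package main

import (
	"context"
	"fmt"
)

type ctxKey string // unexported key type to avoid collisions

func main() {
	// Each WithValue call allocates a node pointing at its parent.
	ctx := context.Background()
	ctx = context.WithValue(ctx, ctxKey("request-id"), "abc123")
	ctx = context.WithValue(ctx, ctxKey("user"), "kamstrup")
	ctx = context.WithValue(ctx, ctxKey("trace"), "span-42")

	// Value() scans the chain parent by parent until a key matches,
	// so deep chains mean a linear walk on every lookup.
	fmt.Println(ctx.Value(ctxKey("request-id"))) // abc123
}
```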

[–] kamstrup@programming.dev 1 points 4 months ago* (last edited 4 months ago)

Depending on your needs you can also break it into a columnar format with some standard compression on top. This allows you to search individual fields without looking at the rest.

It also compresses exceptionally well, and "rare" fields will be null in most records, so run-length encoding will compress them to near zero

See e.g. Parquet
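To illustrate the run-length encoding point, a toy sketch (not how Parquet actually encodes things, just the idea that a mostly-null column collapses to a handful of runs):

```go
package main

import "fmt"

type run struct {
	value *string // nil means null
	count int
}

// rle collapses consecutive equal values (including nulls) into (value, count) runs.
func rle(col []*string) []run {
	var runs []run
	for _, v := range col {
		if n := len(runs); n > 0 && equal(runs[n-1].value, v) {
			runs[n-1].count++
			continue
		}
		runs = append(runs, run{value: v, count: 1})
	}
	return runs
}

func equal(a, b *string) bool {
	if a == nil || b == nil {
		return a == b
	}
	return *a == *b
}

func main() {
	// A "rare" column: one real value buried in nulls.
	col := make([]*string, 1000)
	v := "present"
	col[500] = &v
	fmt.Println(len(rle(col)), "runs for 1000 values") // 3 runs
}
```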

[–] kamstrup@programming.dev 2 points 10 months ago* (last edited 10 months ago)

Postgres and MySQL/MariaDB are all primarily written in C.

Contrary to what other posters here claim, most programming languages are not written in C, but are self-hosted, i.e. written in themselves. This usually involves a small bootstrapping component written in C or something similar, but that is a minor part of the whole

[–] kamstrup@programming.dev 29 points 11 months ago

That we stop fawning over tech CEOs

[–] kamstrup@programming.dev 5 points 11 months ago

Thank you for saying this. Sometimes I feel like I am the only one thinking like this 🙇♥️

 

Go 1.22 will ship with "range over int" and experimental support for "range over func" 🥳
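A quick sketch of both forms (range over func needs GOEXPERIMENT=rangefunc on 1.22; it was stabilized in a later release):

```go
package main

import "fmt"

// evens is a range-over-func iterator: the compiler turns the for-range
// loop body into the yield callback.
func evens(n int) func(yield func(int) bool) {
	return func(yield func(int) bool) {
		for i := 0; i < n; i += 2 {
			if !yield(i) {
				return
			}
		}
	}
}

func main() {
	// Range over int: counts 0..4.
	for i := range 5 {
		fmt.Println("int:", i)
	}

	// Range over func: pulls values from the iterator above.
	for v := range evens(10) {
		fmt.Println("func:", v)
	}
}
```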