pe1uca

joined 1 year ago
[–] pe1uca@lemmy.pe1uca.dev 3 points 1 day ago (1 children)

How does it work with Linux and Wine?

[–] pe1uca@lemmy.pe1uca.dev 7 points 1 day ago

If your games are on Steam then this is not for you, since to use Steam games you need the Steam client.
This is for games bought on GOG or any other platform that properly provides installers.

[–] pe1uca@lemmy.pe1uca.dev 7 points 4 days ago (1 children)

Weird, it didn't ask using Firefox and uBlock Origin.
I don't have all the lists active, though.

4
submitted 6 days ago* (last edited 6 days ago) by pe1uca@lemmy.pe1uca.dev to c/quebec@lemmy.ca
 

Do you know if there will be any problems with this situation?

I just refused the rent increase and the company sent the case to the TAL.
If I do a lease transfer, will the next tenant run into problems?
I'm asking in case it could scare off potential candidates, or in case I'd have problems myself.

I also think the company could refuse both processes because both units belong to them, is that right?

What do you recommend?

My unit is a 3 1/2 and the other is a 4 1/2, for only ~$30 more.

[–] pe1uca@lemmy.pe1uca.dev 10 points 1 week ago (1 children)

Why do you need the files on your local machine?
Is your network that slow?

I've heard of multiple content creators who keep their video files on their NAS to share with their editors, and they work directly off the NAS.
Could you do the same? You'll be working with music, so the network traffic will be lower than with video.

If you do this you just need a way to mount the external directory, either with rclone or with sshfs.
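A rough sketch of both options (hostname, share path, and remote name are made up):

# mount the NAS music share over SSH with sshfs
sshfs user@nas.local:/music ~/nas-music
# or mount an rclone remote that's already configured as "nas"
rclone mount nas:music ~/nas-music --daemon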


The disks on my NAS go to sleep after 10 minutes idle time and if possible I would prefer not waking them up all the time

As a non-expert on NAS setups, spinning the disks down sounds like a good way to avoid extra stress on the drives, but I've read that most of the actual wear and tear happens during the spin-up/spin-down process itself. That's why NAS drives are usually kept spinning all the time.
And drives built specifically for NAS setups are designed with this in mind.
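If you do want to keep them spinning, the spin-down timer is usually the thing to change; a sketch, assuming the drives show up as /dev/sdX and respond to hdparm:

sudo hdparm -C /dev/sda   # check the current power state
sudo hdparm -S 0 /dev/sda # disable the standby (spin-down) timer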

[–] pe1uca@lemmy.pe1uca.dev 36 points 1 week ago (2 children)

There's a difference between water and liquid.

Not sure if the solid core has more mass than the mantle.
In any case, I'd say it's like a balloon with something solid floating in the middle.

[–] pe1uca@lemmy.pe1uca.dev 18 points 1 week ago

IIRC: the webp and webm file extensions, and the VP8/VP9 video formats.

[–] pe1uca@lemmy.pe1uca.dev 1 points 2 weeks ago (1 children)

IIRC they mentioned it's next to impossible without actually processing the video and guessing when the ad stops on your client (since the ads change per user, it can't be done on a server for all users).

[–] pe1uca@lemmy.pe1uca.dev 13 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

Yes, most podcasts are hosted outside of your podcast player and distributed via RSS (even if the player is Spotify, which already hosts music).
So when a service "has" a podcast, it really just lists what the RSS feed returns; usually it only copies the text data, including the URL where the actual audio is stored.
That audio is served by whatever service the podcast's creator uses, which means that to that service you're a free user even if you pay for Spotify, with the wonderful benefit of ads.
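You can see this for any show by pulling its feed and looking at the enclosure URLs, which point at the creator's hosting service rather than at your player (the feed URL here is a placeholder):

curl -s https://example.com/podcast/feed.xml | grep -o '<enclosure[^>]*>'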

And these are ads you can't block, since they're baked into the audio stream (yay! /s).
Podverse (the player I use) mentions this as an issue when creating clips of podcasts, because it can't know how much the timestamps have been offset by those ads, so your clip probably only sounds right to you.

[–] pe1uca@lemmy.pe1uca.dev 9 points 2 weeks ago

As long as you mean a landslide win by a party led by a guy who said a religious charm was better during the pandemic than any medication, vaccine, or other countermeasure; a guy who answered "women deserve to go to heaven" when asked if he's a feminist; a guy who has said all power should be concentrated in the government, not in independent entities; a guy who said wind turbines make the landscape ugly and who made two big investments in refineries during his administration... Yeah, it's a good thing to see the left wing in power.

[–] pe1uca@lemmy.pe1uca.dev 25 points 2 weeks ago (4 children)

She IS AMLO's administration: during her campaign she never said a word about anything before he had said it first.

AMLO had said since the beginning of his term that he would withdraw from public life to his home state after today, but earlier this year he said he would come back if circumstances demanded it, and just last month, I think, he said he would stay around.

I don't wish her luck, I wish México luck.

[–] pe1uca@lemmy.pe1uca.dev 8 points 3 weeks ago

I'd even go further and say: always test every change you make; don't assume a change took effect just because you updated a file.

[–] pe1uca@lemmy.pe1uca.dev 4 points 1 month ago* (last edited 1 month ago)

I use rclone and duplicati depending on the needs of the backup.

For long-term backups I use Duplicati: it has a GUI and can upload to several destinations (mine are spread between e2 and Drive).
You configure the backend, the encryption password, the schedule, and version retention.

rclone, with its crypt remote, lets you mount the backup destination as an external drive, so you have to handle the actual copying of the data yourself, plus versioning and retention.
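A sketch, assuming a crypt remote named "crypted" wrapping the real destination:

rclone copy ~/data crypted:data   # plain copy, no versioning
# crude versioning: move replaced/deleted files into a dated folder
rclone sync ~/data crypted:data --backup-dir crypted:old/$(date +%F)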

 

So, I'm selfhosting Immich. The issue is we tend to take a lot of pictures of the same scene/thing to later pick the best one, and we can easily end up with 5~10 photos which are basically duplicates, but not quite.
Some duplicate-finding programs put those images at 95% or more similarity.

I'm wondering if there's any way, probably at the file system level, for these near-identical images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?
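For what it's worth, block-level dedup (e.g. duperemove on btrfs/XFS) only merges byte-identical extents, so it probably won't help with photos that are merely similar; a sketch of what it would look like anyway:

duperemove -dr /path/to/photos   # -d actually dedupes, -r recurses into the directory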

 

I was using SQL_CALC_FOUND_ROWS and SELECT FOUND_ROWS();
But these have been deprecated: https://dev.mysql.com/doc/refman/8.0/en/information-functions.html#function_found-rows

The recommended way now is to first run the query with LIMIT and then run it again without it, selecting COUNT(*).
My query is a bit complex and joins a couple of tables with a large number of records, which makes each SELECT take up to 4 seconds, so the process now takes twice as long as when I just used FOUND_ROWS().
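For reference, the recommended two-query pattern looks roughly like this when run through the mysql client (database, tables, columns, and filters are made-up examples):

mysql mydb <<'SQL'
SELECT i.id, i.name
FROM items i
JOIN categories c ON c.id = i.category_id
WHERE c.active = 1
ORDER BY i.id
LIMIT 50 OFFSET 0;

SELECT COUNT(*)
FROM items i
JOIN categories c ON c.id = i.category_id
WHERE c.active = 1;
SQL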

How can I go back to running the SELECT a single time and still get the total number of rows found without the LIMIT?

 

cross-posted from: https://lemmy.pe1uca.dev/post/1512941

I'm trying to configure some NFC tags to automatically open an app, which is easy, I just have to type the package name.
But I'm wondering how I can launch the app into a specific activity.

Specifically, when I search for FitoTrack on my phone I get the option to launch the app directly into the workout I want to track, so I don't have to open the app, tap the FAB, tap "Record workout", and then select the workout.
So I want a tag which will automatically launch this app into a specific workout.

How can I find out what data I need to put into the tag to do this?

Looking at the code would probably give me the answer, but that won't work for closed source apps, so is there a way to list all the ways my installed apps can be launched?
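For context, this is the kind of information I'm after, which adb can dig out (FitoTrack's package name is from memory; the activity name at the end is purely hypothetical):

adb shell dumpsys package de.tadris.fitness   # lists the app's activities and their intent filters
adb shell dumpsys shortcut                    # lists published app shortcuts (the "Record workout" style entries)
adb shell am start -n de.tadris.fitness/.RecordWorkoutActivity   # test-launch a specific (hypothetical) activity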

 

I'm using https://github.com/rhasspy/piper mostly to create some audiobooks and read some posts/news, but the voices available are not always comfortable to listen to.

Do you guys have any recommendation for a voice changer to process these audio files?
Preferably it would have a CLI so I can include it in my pipeline for processing RSS feeds, but I don't mind working through a UI.
Bonus points if it can process audio streams.
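For context, the piper step in the pipeline is roughly like this (the model name and the $ARTICLE_TEXT variable are just examples):

echo "$ARTICLE_TEXT" | piper --model en_US-lessac-medium.onnx --output_file article.wav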

 

cross-posted from: https://lemmy.pe1uca.dev/post/1434359

I was trying to debug an issue I have connecting to a NAS, so I was checking the UFW logs and found a lot of connections from my Chromecast HD (Android TV) being blocked on different ports via the local IP.

Sometimes I use Jellyfin, but that's over Tailscale, so there shouldn't be any traffic over the local IP, just over Tailscale's IP.
And there shouldn't have been any traffic right now anyway, since I wasn't using it and didn't have Tailscale on.

The ports seem random; sometimes the same port is tried twice back to back, but then another random port is attempted.

After seeing this I enabled UFW on my daily machine and the same kind of log entries showed up.

So, do you guys know what could be happening here?
Why is the Chromecast trying to access random ports on devices on the same network?
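For anyone wanting to look at the same thing, something like this shows the blocked traffic (the Chromecast's address here is made up):

journalctl -k | grep 'UFW BLOCK' | grep 192.168.1.50   # which ports it keeps hitting
sudo tcpdump -ni any src 192.168.1.50                  # watch the traffic live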

 

I've only ever used ufw, and just now I had to run this command to fix an issue with Docker.
sudo iptables -I INPUT -i docker0 -j ACCEPT
I don't know why I had to run this to make curl work.

So, what exactly did I just do?
This is behind my home router, which already rejects incoming traffic from the WAN, so I'm guessing it's fine, right?

I'm asking because I was previously running this image on a VPS which has a public IP, and this makes me wonder if I have something open there without knowing :/

ufw is configured to deny all incoming, but I learnt that Docker bypasses this if you publish ports as 8080:8080 instead of 127.0.0.1:8080:8080. And I confirmed it by accessing the IP and port.
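To illustrate the difference (the image name is just a placeholder):

docker run -p 8080:8080 some-image             # published on all interfaces; reachable from the LAN even with ufw denying incoming
docker run -p 127.0.0.1:8080:8080 some-image   # bound to loopback only; not reachable from other machines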

 

I mean, the price of the product is the same; I'm taking a loan for the duration of the credit but paying no interest?
What's the catch?
I can keep my money earning a bit of interest instead of handing it over right away, and without any increase in the price of what I was already planning to buy. When or why wouldn't I choose 0% credit?

 

I'm looking at my library and I'm wondering if I should process some of it to reduce the size of some files.

There are some movies in 720p that are 1.6~1.9GB each, and then some at the same resolution that are 2.5GB.
I even have some in 1080p which are just 2GB.
I only have two movies in 4K; one is 3.4GB and the other is 36.2GB (I can't really tell the difference in detail since I don't have a 4K display).

And then there's an anime I have twice at the same resolution: one set of files is around 669~671MB each, the other set 191MB each (here the quality difference is noticeable while playing them, as opposed to the other files, where I compared by extracting frames).

What would you do? What's your target size for movies and series? What bitrate do you go for, and in which codec?

Not sure if it's kind of blasphemy around here to talk about compromising quality for size, hehe, but I don't know where else to ask. I was planning on using these ffmpeg settings, what do you think?
I tried it on an anime at 1080p, going from 670MB to 570MB, and I wasn't able to tell the difference in quality when extracting a frame from the input and the output.
ffmpeg -y -threads 4 -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda -i './01.mp4' -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0 './01_smaller.mp4'
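The frame comparison mentioned above can be done with something like this (the timestamp is arbitrary; './01_smaller.mp4' is the re-encoded output from the command above):

ffmpeg -ss 00:10:00 -i './01.mp4' -frames:v 1 before.png
ffmpeg -ss 00:10:00 -i './01_smaller.mp4' -frames:v 1 after.png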
