cruzaderNO

[–] cruzaderNO@alien.top 1 points 11 months ago (1 children)

It tends to go hand in hand with a NAS that has a built-in 10GBase-T switch.

SFP+ tends to get pitched on latency in this segment, but that ship has sailed once the NIC sits behind a switch.

[–] cruzaderNO@alien.top 1 points 11 months ago (1 children)

The IOM6 units are just dumb expanders; they do not support zoning or any other way of splitting the shelf.
They just give direct access to all the drives, and that's it.

On paper you can split it by just connecting both servers and making sure you don't use the same drive on both (something like the sketch below can help verify that).
(I would expect you to need interposers if these are SATA drives.)
Have not done this myself, but I've seen it done in multiple setups.
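
If you want to sanity-check that the two hosts really are claiming disjoint sets of drives, a minimal sketch (assuming Linux hosts with the shelf's disks showing up as /dev/sd*; the helper name is just illustrative) is to dump each host's view of the drive WWNs and diff the output between the two servers:

```python
#!/usr/bin/env python3
# Hypothetical helper: run on each server cabled to the shared shelf and
# compare the output, so the two hosts never end up using the same drive.
# Assumes a Linux host exposing SAS/SATA disks as /dev/sd* through sysfs.
from pathlib import Path

def list_drive_wwids():
    ids = {}
    for dev in sorted(Path("/sys/class/block").iterdir()):
        # Skip partitions (sda1, ...) and non-sd devices (loop, dm, nvme, ...).
        if not dev.name.startswith("sd") or dev.name[-1].isdigit():
            continue
        wwid_file = dev / "device" / "wwid"
        if wwid_file.exists():
            ids[dev.name] = wwid_file.read_text().strip()
    return ids

if __name__ == "__main__":
    for name, wwid in list_drive_wwids().items():
        print(f"{name}\t{wwid}")
```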

The cleaner approach imo would be a virtualized TrueNAS (or similar) file server running on the host that needs the lowest latency, sharing out to the rest.

[–] cruzaderNO@alien.top 1 points 11 months ago (1 children)

The Firebox M500 is solid for running pfSense/OPNsense.

Tripp Lite PDUs are very nice if they have the management card in them.

The 3560 I would not use, due to its power consumption and age.

[–] cruzaderNO@alien.top 1 points 11 months ago

How small do you need to start?

And you need to define "FAST" with an actual number you need to hit.

[–] cruzaderNO@alien.top 1 points 11 months ago

Power disable lets the host power-cycle the drive without power-cycling itself or you having to pull and re-seat the drive.

The downside is that when plugged into "legacy hardware" that does not support the feature, you can't give the drive 3.3V, since that voltage on the repurposed pins holds it in the disabled state.

That's why you often see people taping over a few pins on the drive's power connector, or using Molex->SATA adapter cables that do not power those pins, since the Ultrastar enterprise drives you often get when shucking have this feature.

[–] cruzaderNO@alien.top 1 points 11 months ago (1 children)

These days a barebones R730 or DL380 Gen9 will often be cheaper than a generic Chinese X99 board. (Barebone/CTO generally only lacks CPU/RAM/storage.)

They use the same typical E5-2600 v4 / DDR4, with the SAS HBA + NIC usually included since those are specific to the server.

And you then get PSUs, case, heatsinks, fans etc. included as well.

Buying a board and building yourself is the expensive route; it's usually only taken if you have nowhere to stash the noisy rack option.

[–] cruzaderNO@alien.top 1 points 11 months ago

As a fellow Norwegian, I've always just vented the air into the living area (with a sound trap after the fan).

In a few places I've lived, this has been all my heating, all year.

[–] cruzaderNO@alien.top 1 points 11 months ago

That sounds like a $12-15 TV box.

Their downside is network/storage I/O; add the cost of fixing that and you start passing thin clients/minis in actual cost.

But with the ARM option you are at close to the same power consumption for cores that have 10% of the performance the cheap x86 would give you.

[–] cruzaderNO@alien.top 2 points 11 months ago

Anything rack-related, or switches above the lower-end consumer stuff, is rarely part of the sales.

If you're doing a consumer/whitebox build you might find some stuff though.
Spinners for capacity storage tend to get good deals as well.

[–] cruzaderNO@alien.top 1 points 11 months ago

Whoever sells ASRock Rack in your area?...

[–] cruzaderNO@alien.top 1 points 11 months ago

So at this point, I'm just looking for the rackmount solution.

And multiple have been posted, so problem solved then, I guess.

[–] cruzaderNO@alien.top 1 points 11 months ago (2 children)

Strange setup for AI, given the specs/chips usually found in NUCs/minis.

But unless you plan on stripping the PCBs out of the cases and that sort of thing, they will not be happy running very densely packed.

If you've got a healthy budget I'd expect you to be looking towards gear made for racks and for that use case.
