💬 OT 💻 Post your computer setup

Thirdgen89GTA

Aka "That Focus RS Guy"
TCG Premium
Sep 19, 2010
19,398
16,019
Rockford
Real Name
Bill
Not really, but the bigger capacity increases throughput via parallelization.

On the other hand:

WTF, WD, Toshiba, and Seagate, for slipping SMR drives into the NAS product lines.

SMR is shit if you need to rebuild an array.


And a follow-up with an actual test of the SMR drives in a FreeNAS array.

https://www.servethehome.com/wd-red-s...

 

Thirdgen89GTA

Aka "That Focus RS Guy"
TCG Premium
Sep 19, 2010
19,398
16,019
Rockford
Real Name
Bill
Damn, that's really interesting. Really crappy stuff considering the performance implications.

Also, I'm not surprised. As a technology nears the end of its life it tends to falter. I remember floppy disks dying and there being severe, severe quality issues with them as that was happening.
This isn't a quality issue though. It's the method by which the data is stored: an SMR drive must write data carefully, since each track is layered over the last like a roof shingle. Bad for large writes like during a rebuild; the drive slows down to awful speeds.
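
Rough illustration of why (zone size here is an assumed round number, real drives vary): rewriting one 4KB block in the middle of a 256MB shingled zone can force the drive to read and re-lay everything shingled behind it. That's up to 256MB of internal work for a 4KB write, a worst-case write amplification of 65,536x. A rebuild issues millions of writes like that, which is how you end up at single-digit MB/s.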
 

Mr_Roboto

Doing the jobs nobody wants to
TCG Premium
Feb 4, 2012
25,906
31,086
Nashotah, Wisconsin (AKA not Illinois)
This isn't a quality issue though. It's the method by which the data is stored: an SMR drive must write data carefully, since each track is layered over the last like a roof shingle. Bad for large writes like during a rebuild; the drive slows down to awful speeds.

I would debate that. The reasoning is that they've taken measures to cut costs, and in turn a critical dimension of the disk's utility has been compromised. Rebuild time is immensely important. The enterprise storage market went away from 7200 RPM disks in tier 1 RAID-based systems (EMC and HDS) because the rebuild times would compromise reliability even using double parity in RAID 6. Instead, for hybrid arrays they went to a combination of 10K disks and SSDs, and right now they're all flash. It's a matter of time, truthfully. I agree with the guy; doing what they did was extremely ignorant and should probably result in a lawsuit from parties who experience data loss or lack of availability.
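
Back-of-the-napkin math (round figures assumed: a 4TB member, ~150 MB/s sustained CMR writes, ~10 MB/s for SMR once its cache is blown):

4,000,000 MB / 150 MB/s ≈ 7.4 hours of pure writing (CMR)
4,000,000 MB / 10 MB/s ≈ 4.6 days (SMR), before any competing IO

That's the window where a second failure kills the array.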
 
Reactions: Lord Tin Foilhat

Mr_Roboto

Doing the jobs nobody wants to
TCG Premium
Feb 4, 2012
25,906
31,086
Nashotah, Wisconsin (AKA not Illinois)
If we were talking hours... that is one thing...

but fucking over a WEEK difference in rebuild time!?! That's insane.

Anyone who deals with RAIDs and degraded arrays knows that's a fucking LIFETIME of praying shit goes fine.

Consider that if the disk is being accessed, even with the "good" technology and a 1 or 2TB drive (not a 4), it can take weeks as it is to rebuild.
 

Thirdgen89GTA

Aka "That Focus RS Guy"
TCG Premium
Sep 19, 2010
19,398
16,019
Rockford
Real Name
Bill
I would debate that. The reasoning is that they've taken measures to cut costs, and in turn a critical dimension of the disk's utility has been compromised. Rebuild time is immensely important. The enterprise storage market went away from 7200 RPM disks in tier 1 RAID-based systems (EMC and HDS) because the rebuild times would compromise reliability even using double parity in RAID 6. Instead, for hybrid arrays they went to a combination of 10K disks and SSDs, and right now they're all flash. It's a matter of time, truthfully. I agree with the guy; doing what they did was extremely ignorant and should probably result in a lawsuit from parties who experience data loss or lack of availability.
It's not a manufacturing defect though; the drive is working as intended. It's the improper labeling and marketing of a drive using technology that is NOT suited to RAID.

This type of drive has no business in a RAID setup.

I have a 6-disk 8TB RaidZ2 array and a 9-disk 4TB RaidZ3 array. If I attempted to rebuild either of these arrays with an SMR drive, it would probably take more than a week to finish, and all the while the extra load on the other disks could kill a 2nd disk.

The issue here is not that SMR exists; it's that it's being marketed in NAS-rated drives, which are almost always used in some kind of RAID setup.
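
For reference, that kind of layout in zpool terms looks something like this (pool and device names are made up for illustration, not my actual config):

zpool create tank raidz2 da0 da1 da2 da3 da4 da5
zpool create vault raidz3 da6 da7 da8 da9 da10 da11 da12 da13 da14

# swapping out a dead disk is what kicks off the resilver an SMR drive chokes on
zpool replace tank da3 da15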
 

Thirdgen89GTA

Aka "That Focus RS Guy"
TCG Premium
Sep 19, 2010
19,398
16,019
Rockford
Real Name
Bill
Consider that if the disk is being accessed, even with the "good" technology and a 1 or 2TB drive (not a 4), it can take weeks as it is to rebuild.
How many disks are you talking about, though? A smaller disk array shouldn't normally take that long. My 6- and 9-disk arrays take about 8 hours max.

pool: Frigga
scan: scrub repaired 0 in 0 days 06:12:50 with 0 errors on Sat May 9 06:12:54 2020

pool: Odin
scan: scrub repaired 0 in 0 days 07:54:58 with 0 errors on Sun May 24 07:55:02 2020
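
(Those lines are straight out of zpool status; a scrub kicked off by hand is just:

zpool scrub Frigga
zpool status Frigga

and the "scan:" line shows progress or the finished result.)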
 

Thirdgen89GTA

Aka "That Focus RS Guy"
TCG Premium
Sep 19, 2010
19,398
16,019
Rockford
Real Name
Bill
Sure, but that doesn't NEED to be the case nowadays.
If you are in an enterprise, you should NOT be using WD Reds.

WD Reds are prosumer. Most people cannot afford the cost of an all-flash array of any real capacity. SOHO is still going to be Spinning Rust, and likely not enterprise-grade stuff.
 

Lord Tin Foilhat

TCG Conspiracy Lead Investigator
TCG Premium
Jul 8, 2007
60,725
56,889
Privy Chamber
If you are in an enterprise, you should NOT be using WD Reds.

WD Reds are prosumer. Most people cannot afford the cost of an all-flash array of any real capacity. SOHO is still going to be Spinning Rust, and likely not enterprise-grade stuff.
a flash array in my server would be more than triple what I paid for it with USED SSDs :rofl:

and that is only 8 drives
 

Thirdgen89GTA

Aka "That Focus RS Guy"
TCG Premium
Sep 19, 2010
19,398
16,019
Rockford
Real Name
Bill
a flash array in my server would be more than triple what I paid for it with USED SSDs :rofl:

and that is only 8 drives
I have an all-flash array. It's 6 256GB SSDs in RaidZ1. It's only 1.2TB, and when I bought the disks it wasn't cheap. I don't need that much SSD storage though, so when I "upgrade" it, I'll probably just run 3x 1TB SSDs in RaidZ1, or 2x 2TB mirrored.
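
Either way nets roughly the same usable space; in zpool terms (device names made up):

zpool create flash raidz1 ada0 ada1 ada2 # 3x 1TB, ~2TB usable, survives 1 failure
zpool create flash mirror ada0 ada1 # 2x 2TB, ~2TB usable, survives 1 failure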

The flash array is where I store the Plex DB, iTunes Library, and other frequently accessed small files. Everything else sits on Spinning Rust.
 

Mr_Roboto

Doing the jobs nobody wants to
TCG Premium
Feb 4, 2012
25,906
31,086
Nashotah, Wisconsin (AKA not Illinois)
How many disks are you talking about, though? A smaller disk array shouldn't normally take that long. My 6- and 9-disk arrays take about 8 hours max.

pool: Frigga
scan: scrub repaired 0 in 0 days 06:12:50 with 0 errors on Sat May 9 06:12:54 2020

pool: Odin
scan: scrub repaired 0 in 0 days 07:54:58 with 0 errors on Sun May 24 07:55:02 2020

Depends on the system. Could be dozens, could be as few as six. The difference is how often the disk is getting hit. A typical 7200 RPM disk is only rated to about 80 IOPS of mixed IO. Linear IO operations give "hero" numbers, especially if you short-stroke the disk. Even if you're directing, say, 1-3% of IO to those disks, you could overrun them when you're running tens of thousands of IOPS on an array and getting that much traffic. Definitely different from a lot of the prosumer space, where the disks just sit and chill most of the time.
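
Quick math on that (array size and IO split are assumed for illustration): 6 disks x 80 IOPS is ~480 IOPS for the group. Send it even 2% of a 50,000 IOPS array workload and that's 1,000 IOPS, better than 2:1 oversubscribed.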

If you are in an enterprise, you should NOT be using WD Reds.

WD Reds are prosumer. Most people cannot afford the cost of an all-flash array of any real capacity. SOHO is still going to be Spinning Rust, and likely not enterprise-grade stuff.

Believe it or not, Pure Storage arrays use consumer-grade Samsung SSDs. Dead serious. The architecture accommodates that (see previous remark about non-RAID algos). I've seen WD Blacks in traditional enterprise midrange NASes as well. I wouldn't be so quick to discount a company's inclination to save a few dollars.

I would be curious to see if these actually work properly in the "service provider" space he talks about. Since writes seem to be the limitation, I doubt they'd hold up under the technologies that space uses, like Ceph (actually, ZFS *is* one of those technologies as well). We are installing an enterprise-grade Sun/Oracle ZFS Appliance with hundreds of TB of raw capacity at this moment; it's a pretty cool and mature technology. It's more than "prosumer."
 

Lord Tin Foilhat

TCG Conspiracy Lead Investigator
TCG Premium
Jul 8, 2007
60,725
56,889
Privy Chamber
Well, got my NAS moved over from a full tower to a 4U rackmount case... Went from taking up 14U to 4 :rofl: so much more room in the rack now.


But 2 new problems...

-CPU fan is too tall for the new case, so I have to swap back to my other shorter, wider one... probably do that when the Noctua fans come in Friday. The stock ones that came with the case move a ton of air but are loud as fuck.

-New rack cable management sticks out too far, so I am unable to close the door fully, and now I need to un-rack everything and shift all the mounting points backwards... fucking shit.

Last project is to back up the server, swap all my 300GB drives to 900GB, rebuild the array, and restore... Very nice increase, but probably a night's worth of work.
 

Thirdgen89GTA

Aka "That Focus RS Guy"
TCG Premium
Sep 19, 2010
19,398
16,019
Rockford
Real Name
Bill
So I run the Red drives in RAID5 and it's time for me to buy another. This SMR stuff is scaring me a bit. I was thinking about buying this instead.
Amazon product

Is it bad juju to mix 5400 RPM and 7200 RPM drives in RAID?

The RAID will perform at the lowest common denominator.

Basically, the rest of the drives will wait on the slowest drive.
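
Rough example (speeds are assumed ballpark figures): mix a ~180 MB/s 7200 RPM disk with ~150 MB/s 5400 RPM disks and the array's sequential throughput tracks the 150, since a stripe isn't done until the slowest member finishes its chunk.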
 
Reactions: Lord Tin Foilhat

GTvert90

TCG Elite Member
TCG Premium
Jun 10, 2016
1,370
1,345
Machesney Park
Real Name
Phil
The RAID will perform at the lowest common denominator.

Basically, the rest of the drives will wait on the slowest drive.
That's what I figured, but I just wanted to make sure.

Is it possible to use, say, a 6TB drive, partition it to 4TB, and use it in an array with other 4TB drives? I really should reduce the number of drives and increase their size, as this will be my 9th data disk, but I don't want to spend the money on that.
 

Lord Tin Foilhat

TCG Conspiracy Lead Investigator
TCG Premium
Jul 8, 2007
60,725
56,889
Privy Chamber
That's what I figured, but I just wanted to make sure.

Is it possible to use, say, a 6TB drive, partition it to 4TB, and use it in an array with other 4TB drives? I really should reduce the number of drives and increase their size, as this will be my 9th data disk, but I don't want to spend the money on that.
Yes you can.
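
If it's Linux software RAID, the rough idea is this (a hedged sketch; mdadm assumed, device names are placeholders):

parted -s /dev/sdf mklabel gpt
parted -s /dev/sdf mkpart primary 0% 4TB # carve a 4TB slice of the 6TB disk
mdadm --manage /dev/md0 --add /dev/sdf1 # the array treats it like any other 4TB member

Many hardware RAID cards let you do the same thing by building the virtual disk smaller than the physical one.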
 

Fish

From the quiet street
TCG Premium
Aug 3, 2007
40,578
7,992
Hanover Park
Real Name
Fish
I found this while trying to research what this means when using unRaid.

(image: list of SMR vs. CMR drive models)


Came from here.



My WD Red 4TB is an EFRX and not an EFAX; EFAX is the SMR drive and EFRX is the CMR. My one 3TB drive is a Seagate and it's not on that list, but I did Google the model number and there are lawsuits about how unreliable it is. Ruh-roh. LOL. 2 of my 8TB drives are IronWolf drives, which are CMR, and one is a WD white label I pulled from an external.
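
Easy way to check what you've actually got without pulling drives (smartctl from smartmontools; device name is a placeholder):

smartctl -i /dev/sdX | grep -i model

If the model string ends in EFRX it's CMR; EFAX is SMR.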
 
