Hmmm, just so you know -- I don't think I'm the only other IT-related person on here who would offer free advice.
I do work in the public sector these days, so I don't do consulting anymore -- which translates to some reasonable advice without trying to make a buck. Anyways, I currently run a pretty large server setup (about 10 racks full at the moment). As we ONLY run open source solutions, mostly due to cost, the solutions are kinda neat.

First piece of advice: if you are running your own servers, virtualize. It lets you expand, move things around, and avoid wasting money on extra servers until you really, really need them.

My current new and shiny setup -- pushing about 100 GB daily, 8 VMs total, 8 cores/16 GB RAM:

1) Squid as a reverse proxy / load balancer -- handles 50% of the load at about 5% of 1 CPU and 256 MB RAM
2) Bulk storage served over NFS to the load-balanced Apache servers
3, 4, 5) Matching Apache web servers
6) memcached server -- 128 MB RAM and negligible CPU usage (startup sketch at the bottom of this post)
7) MySQL server -- on fast disk, everything tweaked for fast DB access
8) MySQL slave -- allows read-only access, and backups run from here without slowing down the sites (dump sketch below)

Backups -- we have a MASSIVE backup system, but it runs "rsbackup". We use ZFS on FreeBSD with de-duplication and filesystem-level snapshots. The backup server calls out to the NFS/DB servers and does an rsync that pulls only the changed data, then takes a snapshot, then replicates the backups to a second FreeBSD box in a separate data center (rough sketch of the whole cycle below). Files are stored as a copy of the filesystem and are compressed by ZFS. Backups are small in size (only the diff of the files is stored), snapshots happen in a few seconds, and we can roll any server back to any snapshot time. In a few cases the backups happen about every 10 minutes or so. Of course the backup server holds about 40 TB of storage, but it backs up about 90 servers as a full daily backup kept for about 1 year. (These cost us about $4000 each, but are a 5U case.)

Whew -- that is what I get for being a Linux geek.

BTW -- your slow transfers were due to SSH slowdowns. It doesn't perform well over long high-speed links -- look into the HPN patches (10x speed); example at the very bottom.
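For anyone who hasn't run memcached before, a 128 MB instance like VM 6 above is basically a one-liner. The listen address below is a made-up placeholder for a private network:

```sh
# Run memcached as a daemon with a 128 MB cache, bound to an internal
# address (10.0.0.6 is just a placeholder) on the default port.
memcached -d -m 128 -u nobody -l 10.0.0.6 -p 11211
```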
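The whole point of the MySQL slave (VM 8) is that dumps hit the slave, never the master. A minimal sketch, assuming a standard replication slave; the hostname and user are placeholders:

```sh
# Dump everything from the read-only slave so the master (and the sites)
# never feel the backup. --single-transaction keeps InnoDB consistent
# without locking tables. For cron use, put credentials in ~/.my.cnf.
mysqldump --single-transaction --all-databases \
    -h mysql-slave.internal -u backup -p \
    | gzip > /backups/mysql-$(date +%Y%m%d).sql.gz
```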
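I can't speak for "rsbackup" itself, but the pull-rsync / snapshot / replicate cycle described above looks roughly like this on a FreeBSD + ZFS box. Every hostname, pool, and dataset name here is a made-up placeholder, and the previous-snapshot bookkeeping is simplified:

```sh
#!/bin/sh
# One backup cycle for one client, along the lines described above.
# backup/web1 is a dataset on the backup pool with dedup enabled, e.g.:
#   zfs create backup/web1
#   zfs set dedup=on backup/web1
#   zfs set compression=on backup/web1
SRC=web1.internal
DATASET=backup/web1
STAMP=$(date +%Y%m%d-%H%M)

# 1) Pull only the changed files from the client over SSH.
rsync -aH --delete --numeric-ids root@${SRC}:/srv/ /${DATASET}/

# 2) Freeze the result; ZFS snapshots take a couple of seconds.
zfs snapshot ${DATASET}@${STAMP}

# 3) Replicate the new snapshot to the off-site FreeBSD box.
#    ${PREV} is whatever snapshot the remote side already has.
zfs send -i ${PREV} ${DATASET}@${STAMP} | \
    ssh backup2.internal zfs receive ${DATASET}
```

Since only changed blocks land in each new snapshot, every daily "full" only costs about the size of the diff -- which is how a 40 TB box can hold a year of dailies for ~90 servers.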
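On the HPN point: the patches mainly raise SSH's internal buffer sizes and optionally let you switch off payload encryption for bulk data. Assuming an HPN-patched OpenSSH on both ends (the option names below come from the HPN patch set, not stock OpenSSH -- check your build's man page), a bulk copy can look like:

```sh
# The None cipher switch only applies to non-interactive bulk transfers;
# authentication is still encrypted.
scp -oNoneEnabled=yes -oNoneSwitch=yes \
    big-dump.tar.gz user@remote.example.com:/data/
```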