Join our guest Vale Inco, a worldwide leading producer of nickel, and Scalar for an informative session providing insight into how to:
• Automate data management tasks to free up IT resources and eliminate downtime
• Get better utilization out of your storage resources
• Use storage policies to better manage and optimize use of storage devices
• Easily add and manage storage policies for all devices from a single management console
• Reduce overall storage costs by 50 to 80%
• Cut migration times by up to 90%, with zero impact on users during migration
• Reduce backup times and costs by up to 90%
Scalar designs, deploys and manages the entire IT stack, including eco considerations: system implementation, capacity planning, health checks, storage and system consolidation, and converged network infrastructure.
Multiple data centre hosting facilities, plus full remote management offerings at customer sites.
Most companies are still managing growth reactively. Where do you put new data when your filesystems fill up?
If you aren't able to dynamically increase the size of a file system (pooling, thin provisioning, etc.), how do you move data between filesystems/servers without impacting users?
When you need to increase capacity, how long does it usually take to acquire, deploy and provision it? Do you play the data "shell" game until it's ready?
In high file count environments, you have a metadata problem, not a data problem.
Lots of small files complicate management strategies.
Archiving, while one strategy to address data growth, actually increases file counts (stubs), creating more of a problem.
Backup and recovery of high file count filesystems are complex: "walking a filesystem" is usually an order of magnitude more time consuming than actually moving the data.
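The metadata cost is easy to see with a small experiment: before a single byte of file data moves, a backup job must enumerate and stat every file. A minimal sketch (the directory layout and file counts are illustrative assumptions; absolute timings will vary by filesystem):

```python
import os
import tempfile
import time

# Build a small tree of tiny files, a stand-in for a high file count filesystem.
root = tempfile.mkdtemp()
for d in range(10):
    sub = os.path.join(root, "dir%d" % d)
    os.mkdir(sub)
    for f in range(100):
        with open(os.path.join(sub, "file%d.txt" % f), "w") as fh:
            fh.write("x")  # one byte of actual "data" per file

# "Walking the filesystem": stat every entry, as any backup tool must.
start = time.perf_counter()
count = 0
total_bytes = 0
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        st = os.stat(os.path.join(dirpath, name))
        count += 1
        total_bytes += st.st_size
walk_time = time.perf_counter() - start

print("%d files, %d bytes of data, walk took %.3fs" % (count, total_bytes, walk_time))
```

With a thousand one-byte files, nearly all of the elapsed time is metadata traversal rather than data movement, which is the pattern behind the 95%/5% split described later.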
File system backups are sequential (one job per filesystem) and take time. Multiple filesystems create management headaches.
Full backups of large amounts of data take time and chew up resources (whether D2D, tape, or dedupe).
A 72-hour backup job can typically be 95% metadata processing and 5% data movement.
Storage is typically on a three-year life cycle, which generally means four if you account for migration in and migration out.
How do you migrate large volumes of data between old and new storage platforms without impacting users?
How do you migrate between different types of technologies, e.g. NetApp to EMC, EMC to BlueArc, Windows/UNIX to NAS?
Managing multiple solutions is typical with unstructured data: UNIX (NFS) and Windows (CIFS) typically coexist, and NAS appliances or gateways come into play when UNIX/Windows can't scale.
Having multiple protocols across multiple fileshares, on multiple servers/NAS solutions, creates management complexity. Whether each platform can grow/scale to meet demand is difficult to predict, and each requires a different strategy for managing growth.
Scale-up strategies leverage the same server/NAS platform by adding capacity. This minimizes management overhead, assuming that filesystems can be scaled dynamically online.
It also assumes that the existing system can sustain performance growth.
Scale-out strategies couple storage capacity with performance, ideally using the same building block for consistency. This is predictable, but creates allocation issues.
Can a single fileshare span multiple devices? How are data and performance distributed?
How do you manage capacity when data on different devices grows at different rates?
How do you manage performance when access patterns are unpredictable?
Is it possible to redistribute content between filesystems and devices to "optimize" utilization? How does this impact users?
A Global Namespace (GNS) has the unique ability to aggregate disparate and remote network-based file systems, providing a consolidated view that can greatly reduce the complexities of localized file management and administration. For example, prior to file system namespace consolidation, two servers each present their own independent namespace, e.g. \\server1\share1 and \\server2\share2. Various files exist within each share, but users have to access each namespace independently. This becomes an obvious challenge as the number of namespaces grows within an organization.
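The consolidation idea can be sketched as a mapping layer: clients see one virtual tree, and a lookup table (policy- and administrator-controlled in a real GNS device) resolves each virtual path to a physical share. A toy resolver, reusing the server1\share1 example above (the virtual path names are hypothetical):

```python
# Toy global-namespace resolver: one virtual tree, many physical shares.
# The virtual prefixes are hypothetical; the shares mirror the example above.
MOUNT_TABLE = {
    "/global/engineering": ("server1", "share1"),
    "/global/finance": ("server2", "share2"),
}

def resolve(virtual_path):
    """Map a virtual path to its physical UNC location (longest prefix wins)."""
    for prefix in sorted(MOUNT_TABLE, key=len, reverse=True):
        if virtual_path.startswith(prefix):
            server, share = MOUNT_TABLE[prefix]
            tail = virtual_path[len(prefix):].replace("/", "\\")
            return "\\\\%s\\%s%s" % (server, share, tail)
    raise KeyError("no backend share for " + virtual_path)

print(resolve("/global/engineering/specs/doc.txt"))
# resolves to \\server1\share1\specs\doc.txt
```

The point of the indirection is that migrating a share to a new array is just a table update: the backend entry changes, while every client keeps using the same virtual path.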
Make the best use of your current and future storage capacity.
Eliminate the need to manually rebalance data; use automated, policy-driven tools instead.
Transitioning from one generation of technology to another becomes a scheduled task, not a six-month project.
Keep your vendors competitive: without the pain of data migration projects, your choice of solution comes down to features and costs. Why pay more by being locked in?
Keep current data on faster, regularly backed up storage, while segregating static, older content that isn't changing to lower tiers.
Eliminate backup of over 80% of your data by cycling it out of the regular backup scheme.
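A "move anything untouched for N days to a lower tier" policy can be sketched in a few lines. A real file-virtualization layer does the move transparently behind the global namespace, but the selection logic is essentially this (the 90-day threshold and tier directories are illustrative assumptions, not a product configuration):

```python
import os
import shutil
import time

AGE_LIMIT_DAYS = 90  # illustrative policy threshold, not a product default

def select_for_tiering(tier1_root, age_limit_days=AGE_LIMIT_DAYS):
    """Return files whose last access time is older than the policy threshold."""
    cutoff = time.time() - age_limit_days * 86400
    aged = []
    for dirpath, _dirnames, filenames in os.walk(tier1_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                aged.append(path)
    return aged

def migrate(files, tier1_root, tier2_root):
    """Move selected files to the lower tier, preserving relative paths."""
    for src in files:
        rel = os.path.relpath(src, tier1_root)
        dst = os.path.join(tier2_root, rel)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.move(src, dst)
```

Once static data sits on the lower tier, it can drop out of the daily backup cycle and be protected on a much longer schedule, which is where the backup-volume reduction above comes from.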
No client reconfiguration: with a virtualized, global namespace, the location of data is policy and administrator controlled. Moving data around does not impact access to it.
Move entire file systems or individual files around without interrupting access to them.
Reduce the overhead of migration projects with a streamlined, consistent, automated solution.
Reduce your storage costs by putting data on the right (cost) tier of storage, automated and policy driven.
Reduce your backup volumes dramatically by moving aged data out of the daily/weekly/monthly backup cycle. Back static data up once a quarter or less, with proper retention practices.
Our Problem(s)
• Extremely large volumes of data growing out of control
• Millions of files, many of them under 1 KB in size
• An aging, end-of-life data archiving solution
• 5-day backup times
• Backups running during business hours
• Small change windows for outages in a 24-hour operation that does not like downtime
The Solution
• Two pairs of ARX 4000s: one pair in our primary DC, one pair in our largest site
How We Used the ARX
• Tier 1 / Tier 2 capacities: 5 TB / 3 TB and 2.5 TB / 4 TB
The Results
• Backup times went from 98 hours to 28 hours
• 5 streams have been turned into 14 streams, 4 of which only happen once a month
• In the primary DC, backup times went from 110 hours for 1 full backup to 21 hours over 5 streams for the same full
• Archiving has been undone in one site and is under way in the other; re-archiving is based on change, through tiering
• All data moves were done during business hours without impacting user data access
Some Bonus Results
• Tape usage has gone down thanks to tiering
• Data types can be isolated
• Old systems still accessing network storage are surfaced
• Strange connections get identified
• The MP3 library gets a boost!
We'll look at all aspects of your data environment (online, nearline, offline), processes, and applications, and provide guidance on how to get from "current state" to your desired "future state" given the challenges specific to your business.
We will present the results of the analysis, along with recommendations on how to realize the benefits of file virtualization.
A mapping of benefits to your specific challenges will help build an ROI/TCO for business justification.
Justification through soft and hard dollar cost savings will be presented to help establish a business case for deployment in your environment.
Costs: Free to Session Attendees. Because we believe this solution can be proven out as a cost-effective, highly impactful way of managing unstructured data growth, we are presenting this 2½ day engagement free of charge.
Learn how file virtualization can benefit your environment.
Hassle-free access to the technologies you need:
• 21 vendors' products on display with remote access
• Product demonstrations and hands-on
• Customer Proof-of-Concepts in person or via remote connections
• Interoperability testing between servers, networks and storage
• Access to direct vendor assistance as needed
• Convenient downtown Toronto location near Yonge and King
• Events, tours and special requests
EMC Avamar and Data Domain are in our lab, with site-to-site replication between here and our Vancouver office, available for product demonstrations, evaluations and POCs.
Scalar Labs also hosts bi-weekly training sessions for our customers on Fridays over lunch: no charge to participate, technical topics only, no sales/marketing material. View the schedule and register on www.scalar.ca