System Manager 2.0 enables you to manage multiple storage systems and storage elements such as disks, volumes, and aggregates. It provides a web-based graphical interface for managing common storage system functions from a browser.
You can perform tasks such as:
- Configure and manage storage elements such as disks, aggregates, volumes, LUNs, qtrees, and quotas.
- Configure protocols such as CIFS and NFS and provision file sharing.
- Configure protocols such as FC and iSCSI for block access.
- Create vFiler units and manage them.
- Set up SnapMirror relationships and manage SnapMirror tasks.
- Manage HA configurations and perform takeover and giveback.
- Monitor and manage your storage systems.
Note: The Public Beta program is open to all existing NetApp customers. The Beta version of NetApp System Manager 2.0 must be installed only in non-production environments, and support for the Beta is limited to the NetApp Communities forum:
http://communities.netapp.com/groups/netapp-system-manager-20-public-beta
NetApp System Manager 2.0 Tutorials
http://communities.netapp.com/docs/DOC-10703
Tuesday, August 2, 2011
Monday, June 20, 2011
Disk drive in BYP (disk bypass) status
What are Drive Bypass Events?
The fcal link_stats and fcstat device_map outputs report a BYP status for a bypassed drive.
Example:
localhost> fcstat device_map
Loop Map for channel 4a:
Translated Map: Port Count 28
7 29 27 26 25 24 23 22 21 20 19 18 17 16 45 44
43 42 41 40 39 38 37 36 35 34 33 32
Shelf mapping:
Shelf 1: 29 BYP 27 26 25 24 23 22 21 20 19 18 17 16
Shelf 2: 45 44 43 42 41 40 39 38 37 36 35 34 33 32
This means that disk ID 28 has been bypassed.
Drive bypass events are situations that cause the ESH to bypass a drive port, making it inaccessible to the host and isolating the drive from the loop. There are three kinds of drive bypass events:
Threshold Bypasses - Situations where the ESH detects that a certain kind of error is occurring for a specific period of time and the determination is made to bypass the port because of accumulated errors over time or over the amount of data flowing through the loop.
Policy Bypasses - Situations where the ESH detects a critical problem and bypasses a drive in order to maintain the integrity of the loop. This might be a "situational" problem which later clears, but once a drive is bypassed, it will remain bypassed until manually cleared by removing and reseating the drive, or by power cycling of the shelf.
Self Bypasses - The hard drive electronics determine that the internal circuitry cannot function properly anymore. Thus the drive itself calls to be removed from the loop.
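As a quick way to spot bypassed positions in a large device map, here is a minimal Python sketch that parses the shelf-mapping lines and infers the bypassed disk ID from its neighbours. It assumes the IDs on a shelf line run in a consecutive descending sequence, as in the example above; it is an illustration, not a NetApp tool.

# Sketch: infer bypassed disk IDs from 'fcstat device_map' shelf-mapping lines.
# Assumes IDs on each shelf line run in a consecutive descending sequence,
# as in the example above; this is not an official NetApp utility.

def bypassed_ids(shelf_lines):
    """Return a list of (shelf, inferred_disk_id) for every BYP token."""
    results = []
    for line in shelf_lines:
        label, _, ids = line.partition(":")
        shelf = label.strip()                    # e.g. "Shelf 1"
        tokens = ids.split()
        for i, tok in enumerate(tokens):
            if tok != "BYP":
                continue
            if i > 0 and tokens[i - 1].isdigit():
                guess = int(tokens[i - 1]) - 1   # previous ID minus one
            elif i + 1 < len(tokens) and tokens[i + 1].isdigit():
                guess = int(tokens[i + 1]) + 1   # next ID plus one
            else:
                guess = None                     # cannot infer
            results.append((shelf, guess))
    return results

if __name__ == "__main__":
    lines = [
        "Shelf 1: 29 BYP 27 26 25 24 23 22 21 20 19 18 17 16",
        "Shelf 2: 45 44 43 42 41 40 39 38 37 36 35 34 33 32",
    ]
    print(bypassed_ids(lines))   # [('Shelf 1', 28)]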
Thursday, May 19, 2011
EMC VNX / Isilon Spec Sheet
VNXe series -- http://www.emc.com/products/series/vnxe-series.htm#/1
VNX series -- http://www.emc.com/products/series/vnx-series.htm#/1
VNX Gateway -- http://www.emc.com/products/series/vnx-series-gateways.htm#/1
Isilon S-series -- http://www.isilon.com/s-series
Isilon X-series -- http://www.isilon.com/x-series
Isilon NL-series -- http://www.isilon.com/nl-series
General Availability of Data ONTAP 8.0.1 7-Mode
Supported Platforms
Data ONTAP 8.0.1 7-Mode GA release is supported on the following platforms:
FAS/V62x0
FAS/V60x0
FAS/V32x0
FAS/V31x0
FAS/V3070
FAS/V3040
FAS2040
SA series
https://now.netapp.com/NOW/products/cpc/cpc1101-04.shtml
Wednesday, April 27, 2011
NetApp - Changing Gateway Settings
1. SSH to controller 1.
2. Type 'rdfile /etc/rc'
3. Copy the contents of the /etc/rc output to Notepad. (You might want to save this to a file for safekeeping.)
Example:
hostname bob
ifconfig e0a `hostname`-e0a mediatype auto flowcontrol full netmask 255.255.255.0 partner e0a
ifconfig e0b `hostname`-e0b mediatype auto flowcontrol full partner e0b
ifconfig e0c `hostname`-e0c mediatype auto flowcontrol full partner e0c
route add default 1.1.1.1 1
routed on
options dns.domainname bob.local
options dns.enable on
options nis.enable off
savecore
4. In Notepad, change the default route statement to point at the new gateway.
Example:
hostname bob
ifconfig e0a `hostname`-e0a mediatype auto flowcontrol full netmask 255.255.255.0 partner e0a
ifconfig e0b `hostname`-e0b mediatype auto flowcontrol full partner e0b
ifconfig e0c `hostname`-e0c mediatype auto flowcontrol full partner e0c
route add default 2.2.2.2 1
routed on
options dns.domainname bob.local
options dns.enable on
options nis.enable off
savecore
5. Make sure you get this right or your system won't boot. Copy the contents of the modified file to your clipboard.
6. In the SSH session type 'wrfile /etc/rc'
7. Paste the contents of your buffer into the SSH session.
8. Press Ctrl+C to quit the editor.
9. To verify, type 'rdfile /etc/rc' and make sure there are no extra line breaks or truncations.
10. Type 'route delete default'
11. Type 'route add default 2.2.2.2 1'
12. SSH to controller 2.
13. Repeat the steps above.
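If you prefer to script the edit from step 4 rather than doing it in Notepad, here is a minimal Python sketch that swaps the gateway in a saved copy of /etc/rc. It assumes the route line carries a metric, as in the examples above; rc_backup.txt is just the local copy you saved in step 3, and you still paste the result back with wrfile and verify it before rebooting.

# Sketch: replace the default-route line in a saved copy of /etc/rc (step 4 above).
# Gateways shown are the placeholder addresses from the example; always review
# the output before writing it back with 'wrfile /etc/rc'.
import re

def change_default_gateway(rc_text, new_gateway, metric=1):
    """Return rc_text with its 'route add default' line pointing at new_gateway."""
    pattern = re.compile(r"^route add default \S+ \d+$", re.MULTILINE)
    replacement = f"route add default {new_gateway} {metric}"
    new_text, count = pattern.subn(replacement, rc_text)
    if count != 1:
        raise ValueError(f"expected exactly one default route line, found {count}")
    return new_text

if __name__ == "__main__":
    original = open("rc_backup.txt").read()          # saved copy of /etc/rc
    print(change_default_gateway(original, "2.2.2.2"))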
Wednesday, April 20, 2011
NetApp SnapVault: Transfer aborted: destination qtree is not coalesced
If you work with NetApp SnapVault backups, you will occasionally hit the "qtree is not coalesced" error. The snapvault update command fails until you poke the backup relationship with snapvault start, as shown below; this re-initiates the transfer from its most recent checkpoint.
filer:/vol/source_vol/source_qtree dr_filer:/vol/dest_vol/dest_qtree Uninitialized - Idle with restart checkpoint (at 196 GB)
[root@cherrytop etc]# rsh dr_filer snapvault update dr_filer:/vol/dest_vol/dest_qtree
Transfer aborted: destination qtree is not coalesced.
[root@cherrytop etc]# rsh dr_filer snapvault start -S filer:/vol/source_vol/source_qtree dr_filer:/vol/dest_vol/dest_qtree
Transfer started.
Monitor progress with 'snapvault status' or the snapmirror log.
filer:/vol/source_vol/source_qtree dr_filer:/vol/dest_vol/dest_qtree Uninitialized - Transferring (196 GB done)
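If this bites you regularly, the small wrapper below (a sketch, not a supported tool) re-issues the snapvault start poke automatically whenever the update aborts with "not coalesced", shelling out over rsh exactly as in the transcript. The filer names and qtree paths are the examples above.

# Sketch: retry a SnapVault update that aborts with "destination qtree is not
# coalesced" by re-issuing 'snapvault start' (resumes from the checkpoint).
# Filer and qtree names are the examples from the transcript above.
import subprocess

SRC = "filer:/vol/source_vol/source_qtree"
DST = "dr_filer:/vol/dest_vol/dest_qtree"

def rsh(host, *args):
    """Run a command on the filer over rsh and return its combined output."""
    proc = subprocess.run(["rsh", host] + list(args),
                          capture_output=True, text=True)
    return proc.stdout + proc.stderr

def update_or_restart():
    out = rsh("dr_filer", "snapvault", "update", DST)
    if "not coalesced" in out:
        # Poke the relationship; the transfer resumes from its checkpoint.
        out = rsh("dr_filer", "snapvault", "start", "-S", SRC, DST)
    print(out)

if __name__ == "__main__":
    update_or_restart()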
Wednesday, April 13, 2011
Bits and Bytes Calculator
I found this really useful when you have many remote sites and DR running alongside a backup solution.
http://www.speedguide.net/conversion.php
It also includes a bandwidth reference table.
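For a quick back-of-the-envelope check without the web page, a few lines of Python do the same job: given a data set size and a WAN link speed, estimate the transfer time. This assumes decimal (SI) units and an otherwise idle link, so treat the result as an optimistic estimate.

# Sketch: quick bits/bytes conversion and transfer-time estimate for sizing
# DR/backup replication over a WAN link. Assumes SI units (1 Mb = 10**6 bits)
# and an otherwise idle link; real throughput will be lower.

def transfer_hours(data_gb, link_mbps, efficiency=0.8):
    """Hours to move data_gb gigabytes over a link_mbps link."""
    bits = data_gb * 8 * 10**9                 # GB -> bits
    usable_bps = link_mbps * 10**6 * efficiency
    return bits / usable_bps / 3600

if __name__ == "__main__":
    # e.g. 500 GB nightly delta over a 100 Mb/s line at 80% efficiency
    print(f"{transfer_hours(500, 100):.1f} hours")   # ~13.9 hours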
Sunday, March 13, 2011
When do we use the bye, halt, and reboot commands?
-- The bye command works at the ok prompt (ok>) to reboot the filer
-- The halt command works at the filer prompt; it terminates all services (for example, CIFS) and brings the filer to the ok prompt (ok>)
-- The reboot command works at the filer prompt (filer1>) to reboot the filer
Using the halt and bye command
cherrytop#> halt
CIFS local server is shutting down...
CIFS local server has shut down...
Tue Aug 3 15:57:01 PDT [kern.shutdown:notice]: System shut down because : "halt".
Tue Aug 3 15:57:01 PDT [iscsi.service.shutdown:info]: iSCSI service shutdown
Program terminated
ok reboot (shows that reboot only works from the filer prompt)
reboot ?
ok bye
Intel Open Firmware by FirmWorks
Copyright 1995-2004 FirmWorks, NetApp. All Rights Reserved.
Firmware release 4.2.3_i1
Press Del to abort boot, Esc to skip POST
Memory size is 512 MB
Testing SIO
Testing LCD
Probing devices
Testing 512MB
256 to 320MB
Skipping further tests
Complete
Finding image...
Loading isa floppy
Floppy not first boot disk
No floppy disk found.
Booting from fcal
Loading /pci2/fcal@7/disk@10
100%
Starting Press CTRL-C for special boot menu
Using the reboot command
cherrytop#> bye (shows that bye only works at the ok prompt)
bye not found. Type '?' for a list of commands
cherrytop#> reboot
CIFS local server is shutting down...
CIFS local server has shut down...
Tue Aug 3 16:46:01 PDT [kern.shutdown:notice]: System shut down because : "reboot".
Tue Aug 3 16:46:01 PDT [iscsi.service.shutdown:info]: iSCSI service shutdown
Intel Open Firmware by FirmWorks
Copyright 1995-2004 FirmWorks, NetApp. All Rights Reserved.
Firmware release 4.2.3_i1
Press Del to abort boot, Esc to skip POST
Memory size is 512 MB
Note: In a clustered configuration, running the "reboot" command on a node does not implicitly cause a CF takeover. It starts a 90-second counter that gives the rebooting node time to come back online before a takeover is attempted. This timer can be monitored with "cf status" while the partner node is rebooting.
Saturday, March 5, 2011
Data ONTAP 8.0.1 7-Mode and later releases - enhancements
Data ONTAP 8.0.1 7-Mode and later releases provide improved performance, resiliency, and management capabilities for storage resources.
Support for 64-bit aggregates:
In Data ONTAP 8.0 7-Mode and later releases, Data ONTAP supports a new type of aggregate, the 64-bit aggregate. 64-bit aggregates have a larger maximum size than aggregates created with earlier versions of Data ONTAP. In addition, volumes contained by 64-bit aggregates have a larger maximum size than volumes contained by aggregates created with earlier versions of Data ONTAP.
Upgrading to Data ONTAP 8.0.1 increases volume capacity after deduplication:
After you upgrade from an earlier release of Data ONTAP to Data ONTAP 8.0.1 or later, you can increase the size of a deduplicated volume up to 16 TB.
Maximum simultaneous FlexClone file or FlexClone LUN operations per storage system:
Starting with Data ONTAP 8.0 7-Mode, you can simultaneously run a maximum of 500 FlexClone file or FlexClone LUN operations on a storage system.
File space utilization report:
The file space utilization report enables you to see the files and the amount of space that they occupy in a deduplicated volume. You can choose to either move or delete the files to reclaim the space.
Increased maximum RAID group size for SATA disks:
Starting in Data ONTAP 8.0.1, the maximum RAID group size allowed for ATA, BSAS, and SATA disks has increased from 16 to 20 disks. The default size remains the same, at 14 disks.
FlexClone files and FlexClone LUNs support on MultiStore:
In Data ONTAP 7.3.3 and later releases of the 7.x release family, and in Data ONTAP 8.0.1 and later, the FlexClone files and FlexClone LUNs commands are available in the default and nondefault vfiler contexts.
Create a new aggregate while zeroing spare disks
cherry-top# aggr create aggr1 -r 14 -d 0a.16 0a.17 0a.19 0a.22 0a.23 0a.24 0a.25 0a.26 0a.27 0a.28 0a.29 0a.32 0a.33 0a.34
aggregate has been created with 11 disks added to the aggregate. 3 more disks need
to be zeroed before addition to the aggregate. The process has been initiated
and you will be notified via the system log as the remaining disks are added.
Note however, that if system reboots before the disk zeroing is complete, the
volume won't exist.
cherry-top# vol status -s
Spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 0a.35 0a 2 3 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.36 0a 2 4 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.37 0a 2 5 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.38 0a 2 6 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.39 0a 2 7 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.40 0a 2 8 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.41 0a 2 9 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.42 0a 2 10 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.43 0a 2 11 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.44 0a 2 12 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.45 0a 2 13 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
cherry-top# aggr status -v
Aggr State Status Options
aggr1 creating raid_dp, aggr nosnap=off, raidtype=raid_dp,
initializing raidsize=14,
ignore_inconsistent=off,
snapmirrored=off,
resyncsnaptime=60,
fs_size_fixed=off,
snapshot_autodelete=off,
lost_write_protect=off
Volumes:
Plex /aggr1/plex0: offline, empty, active
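If you want to know when it is safe to reboot (see the warning in the output above), a sketch like the following polls 'vol status -s' over rsh and reports which spares are still zeroing. The filer name and rsh access are taken from the examples in this post; adapt it to ssh if you prefer.

# Sketch: poll 'vol status -s' over rsh and report spares still zeroing,
# so you know when it is safe to reboot after 'aggr create' picked
# unzeroed disks. Filer name and rsh access are assumptions.
import subprocess
import time

def spares_still_zeroing(filer="cherry-top"):
    out = subprocess.run(["rsh", filer, "vol", "status", "-s"],
                         capture_output=True, text=True).stdout
    return [line.split()[1]                  # device name, e.g. 0a.35
            for line in out.splitlines()
            if "zeroing" in line]

if __name__ == "__main__":
    while True:
        pending = spares_still_zeroing()
        if not pending:
            print("all spares zeroed")
            break
        print(f"{len(pending)} disks still zeroing: {', '.join(pending)}")
        time.sleep(300)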
Friday, March 4, 2011
Cluster giveback cancelled or waiting
A cluster node is down and waiting for giveback. Attempting a cf giveback from the partner that has taken over generates the following error messages:
filer(takeover)> cf giveback
filer(takeover)> Thu Dec 21 21:01:55 EST [filer (takeover): cf_main:error]: Backup/restore services: There are active backup/restore sessions on the partner.
Thu Dec 21 21:01:55 EST [filer (takeover): cf.misc.operatorGiveback:info]:
Cluster monitor: giveback initiated by operator
Thu Dec 21 21:01:55 EST [filer (takeover): snapmirror.givebackCancel:error]: SnapMirror currently transferring or in-sync, cancelling giveback.
Thu Dec 21 21:01:55 EST [filer (takeover): cf.rsrc.givebackVeto:error]: Cluster monitor: snapmirror: giveback cancelled due to active state
Thu Dec 21 21:01:55 EST [filer (takeover): cf.rsrc.givebackVeto:error]: Cluster monitor: dump/restore: giveback cancelled due to active state
Thu Dec 21 21:01:55 EST [filer (takeover): cf.fm.givebackCancelled:warning]: Cluster monitor: giveback cancelled
To resolve this, make sure the following outputs show no active transfers or sessions:
# snapmirror status -l
# snapvault status
# ndmpd status (kill any active sessions)
After all of the above checks come back clean, issue cf giveback on the system.
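To catch the veto before attempting giveback, the checks above can be scripted. The sketch below runs them over rsh against the node that is in takeover; the hostname is the generic 'filer' from the log, and the strings it greps for are a simplification rather than an exhaustive list of busy states.

# Sketch: run the pre-giveback checks from the list above over rsh and flag
# anything that would veto 'cf giveback'. Hostname is the generic 'filer'
# from the log; the string matching here is a deliberate simplification.
import subprocess

CHECKS = [
    (["snapmirror", "status", "-l"], "Transferring"),
    (["snapvault", "status"],        "Transferring"),
    (["ndmpd", "status"],            "Active"),
]

def giveback_blockers(filer="filer"):
    blockers = []
    for cmd, busy_marker in CHECKS:
        out = subprocess.run(["rsh", filer] + cmd,
                             capture_output=True, text=True).stdout
        if busy_marker in out:
            blockers.append(" ".join(cmd))
    return blockers

if __name__ == "__main__":
    pending = giveback_blockers()
    if pending:
        print("giveback will be vetoed; still active:", ", ".join(pending))
    else:
        print("no active sessions found; try 'cf giveback'")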
Sunday, February 20, 2011
World’s first shipping 7200 RPM 3TB enterprise-class HDD from Hitachi
The Hitachi Ultrastar™ 7K3000 is the world’s first and only 7200 RPM hard drive rated at 2.0 million hours MTBF and backed by a five-year limited warranty. The Ultrastar 7K3000 represents the fifth-generation Hitachi 5-platter mechanical design, first introduced in 2004, and has been field proven by top server and storage OEMs as well as leading Internet giants. When the highest quality and reliability are a top requirement, customer field data proves that the Ultrastar 7K3000 delivers by reducing downtime, eliminating service calls and keeping TCO to a minimum. Engineered for the highest reliability, the Ultrastar 7K3000 is not only put through grueling design tests during development but must also pass stringent ongoing reliability testing during manufacturing. Across the entire Ultrastar family, world-class quality control, combined with scientific root-cause analysis and multi-faceted corrective actions, ensure that Hitachi GST remains the recognized leader in quality and reliability for enterprise-class hard drives.
Highlights:
• 2.0 million hours MTBF
• Up to 3 terabytes of capacity
• 6Gb/s SATA and 6Gb/s SAS models for configuration flexibility
• Dual Stage Actuator (DSA) and Enhanced Rotational Vibration Safeguard (RVS) for robust performance in multi-drive environments
• 24x7 accessibility for enterprise-class, capacity-optimized applications
• 5-year limited warranty
Product Documentation : http://www.hitachigst.com/tech/techlib.nsf/products/Ultrastar_7K3000
Saturday, February 12, 2011
Minimum size for root FlexVol volumes
Storage system model   Minimum root FlexVol volume size
FAS250                 9 GB
FAS270                 10 GB
FAS920                 12 GB
FAS940                 14 GB
FAS960                 19 GB
FAS980                 23 GB
FAS3020                12 GB
FAS3050                16 GB
F87                    8 GB
F810                   9 GB
F825                   10 GB
F840                   13 GB
F880                   13 GB
R100-12TB              13 GB
R100-24TB              19 GB
R100-48TB              30 GB
R100-96TB              53 GB
R150                   19 GB
R200                   19 GB
Thursday, January 27, 2011
All about /etc/rc file
The /etc/rc file contains commands that the storage system executes at boot time to configure the system.
What startup commands do
Startup commands are placed into the /etc/rc file automatically after you run the setup command or the Setup Wizard.
Commands in the /etc/rc file configure the storage system to
1) Communicate on your network
2) Use the NIS and DNS services
3) Save the core dump that might exist if the storage system panicked before it was booted
Default /etc/rc file contents
To understand the commands used in the /etc/rc file on the root volume, examine the following sample /etc/rc file, which contains default startup commands:
#Auto-generated /etc/rc
hostname filerA
ifconfig e0 `hostname`-0
ifconfig e1 `hostname`-1
ifconfig a0 `hostname`-a0
ifconfig a1 `hostname`-a1
route add default MyRouterBox
routed on
savecore
Explanation of default /etc/rc contents
Description: hostname filerA
Sets the storage system host name to "filerA."
Description:
ifconfig e0 `hostname`-0
ifconfig e1 `hostname`-1
ifconfig a0 `hostname`-a0
ifconfig a1 `hostname`-a1
Sets the IP addresses for the storage system network interfaces with a default network mask.
The arguments in single backquotes expand to "filerA" if you specify "filerA" as the host name during setup. The actual IP addresses are obtained from the /etc/hosts file on the storage system root volume. If you prefer to have the actual IP addresses in the /etc/rc file, you can enter IP addresses directly in /etc/rc on the root volume.
Description: route add default MyRouterBox
Specifies the default router. You can set static routes for the storage system by adding route commands to the /etc/rc file. The network address for MyRouterBox must be in /etc/hosts on the root volume.
Description: routed on
Starts the routing daemon.
Description: savecore
Saves the core file from a system panic, if any, in the /etc/crash directory on the root volume. Core files are created only during the first boot after a system panic.
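Since every name that /etc/rc relies on (the `hostname`-N interface names and a non-numeric default-route target such as MyRouterBox) must resolve through /etc/hosts on the root volume, a quick offline cross-check can save a boot problem. The sketch below works on local copies of the two files pulled off the filer with rdfile; the rc.txt and hosts.txt filenames are placeholders.

# Sketch: check that the names an /etc/rc file relies on (the `hostname`-N
# interface names and a non-numeric default-route target) appear in /etc/hosts.
# Works on local copies pulled off the filer with rdfile; not a NetApp tool.
import re

def unresolved_names(rc_text, hosts_text, hostname):
    known = set()
    for line in hosts_text.splitlines():
        line = line.split("#")[0]
        parts = line.split()
        if len(parts) >= 2:
            known.update(parts[1:])          # all names for this address

    needed = set()
    for match in re.finditer(r"ifconfig \S+ `hostname`(-\S+)?", rc_text):
        suffix = match.group(1) or ""
        needed.add(hostname + suffix)
    for match in re.finditer(r"^route add default (\S+)", rc_text, re.MULTILINE):
        target = match.group(1)
        if not target[0].isdigit():          # an IP address needs no hosts entry
            needed.add(target)

    return sorted(needed - known)

if __name__ == "__main__":
    rc = open("rc.txt").read()               # rdfile /etc/rc output
    hosts = open("hosts.txt").read()         # rdfile /etc/hosts output
    print(unresolved_names(rc, hosts, "filerA"))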
Friday, January 14, 2011
BaseBoard Management Controller (BMC) setup
You can manage your storage system locally from an Ethernet connection by using any network interface. However, to manage your storage system remotely, the system should have a Remote LAN Module (RLM) or Baseboard Management Controller (BMC). These provide remote platform management capabilities, including remote access, monitoring, troubleshooting, and alerting features.
cherrytop# bmc setup
The Baseboard Management Controller (BMC) provides remote management capabilities
including console redirection, logging and power control.
It also extends autosupport by sending down filer event alerts
Would you like to configure the BMC? (y/n)? y
Would you like to enable DHCP on BMC LAN interface? (y/n)? n
Please enter the IP address for the BMC [0.0.0.0]: x.x.x.x
Please enter the netmask for the BMC [0.0.0.0]: x.x.x.x
Please enter the IP address for the BMC gateway [0.0.0.0]: x.x.x.x
Please enter the gratuitous ARP Interval for the BMC [10 sec (max 60)]:
The BMC is setup successfully.
The following commands are available; for more information
type "bmc help"
bmc help
bmc setup
bmc status
bmc test
bmc reboot
This can be done online and is transparent to the filer and to the servers connected to the NetApp array.