Sunday, March 13, 2011

When do we use the bye, halt, and reboot commands?

-- The bye command works at the ok prompt (ok>) to reboot the filer
-- The halt command works at the filer prompt; it terminates all services (for example, CIFS) and brings the filer down to the ok prompt (ok>)
-- The reboot command works at the filer prompt (filer1>) to reboot the filer
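
Both halt and reboot also take a few useful flags in 7-Mode. This is a minimal sketch from the documented syntax; confirm the flags against the na_halt(1) and na_reboot(1) man pages on your release:

filer1> halt -t 5    (halt in 5 minutes)
filer1> halt -f      (clustered systems: halt without the partner taking over)
filer1> reboot -t 2  (reboot in 2 minutes)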


Using the halt and bye commands
cherrytop#> halt
 CIFS local server is shutting down... 

CIFS local server has shut down...
Tue Aug 3 15:57:01 PDT [kern.shutdown:notice]: System shut down because : "halt".
Tue Aug 3 15:57:01 PDT [iscsi.service.shutdown:info]: iSCSI service shutdown
Program terminated
ok reboot (shows that reboot only works from the filer prompt)
reboot ?
ok bye

Intel Open Firmware by FirmWorks
Copyright 1995-2004 FirmWorks, NetApp. All Rights Reserved.
Firmware release 4.2.3_i1
Press Del to abort boot, Esc to skip POST

Memory size is 512 MB
Testing SIO
Testing LCD
Probing devices
Testing 512MB
256 to 320MB
Skipping further tests
Complete
Finding image...
Loading isa floppy
Floppy not first boot disk
No floppy disk found.

Booting from fcal
Loading /pci2/fcal@7/disk@10
100%
Starting
Press CTRL-C for special boot menu


Using the reboot command
cherrytop#> bye (shows that bye only works at the ok prompt)
bye not found. Type '?' for a list of commands
cherrytop#> reboot
CIFS local server is shutting down... 
CIFS local server has shut down...
Tue Aug 3 16:46:01 PDT [kern.shutdown:notice]: System shut down because : "reboot".
Tue Aug 3 16:46:01 PDT [iscsi.service.shutdown:info]: iSCSI service shutdown

Intel Open Firmware by FirmWorks
Copyright 1995-2004 FirmWorks, NetApp. All Rights Reserved.
Firmware release 4.2.3_i1
Press Del to abort boot, Esc to skip POST 

Memory size is 512 MB

Note: In a clustered configuration, running the "reboot" command on a node does not implicitly cause a CF takeover. Instead, it starts a 90-second timer that gives the rebooting node time to come back online before the partner attempts a takeover. This timer can be monitored with "cf status" on the partner while the node is rebooting.
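
For example, from the surviving partner:

filer2> cf status   (run repeatedly while the other node reboots; the countdown/takeover state is reported here)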

Saturday, March 5, 2011

Data ONTAP 8.0.1 7-Mode and later releases - enhancements

Data ONTAP 8.0.1 7-Mode and later releases provide improved performance, resiliency, and management capabilities for storage resources.

Support for 64-bit aggregates:
In Data ONTAP 8.0 7-Mode and later releases, Data ONTAP supports a new aggregate type, the 64-bit aggregate. 64-bit aggregates have a larger maximum size than the 32-bit aggregates created with earlier versions of Data ONTAP, and the volumes they contain can also be larger than volumes in those older aggregates.
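
Creating one comes down to the block-format flag on aggr create. A minimal sketch (aggregate name and disk count are placeholders):

filer1> aggr create aggr64 -B 64 16   (create a 16-disk 64-bit aggregate)
filer1> df -A -h aggr64               (check the resulting capacity)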

Upgrading to Data ONTAP 8.0.1 increases volume capacity after deduplication:
After you upgrade from an earlier release of Data ONTAP to Data ONTAP 8.0.1 or later, you can increase the size of a deduplicated volume to up to 16 TB.
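
As a sketch, with a hypothetical volume name (the exact ceiling still depends on platform and release):

filer1> vol size dedupvol 16t        (grow the deduplicated volume)
filer1> sis status /vol/dedupvol     (confirm deduplication is still healthy)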

Maximum simultaneous FlexClone file or FlexClone LUN operations per storage system:
Starting with Data ONTAP 8.0 7-Mode, you can simultaneously run a maximum of 500 FlexClone file or FlexClone LUN operations on a storage system.
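
In 7-Mode these operations run through the clone command; a minimal sketch with hypothetical paths:

filer1> clone start /vol/vol1/base.vmdk /vol/vol1/base_clone.vmdk
filer1> clone status vol1            (running operations count against the 500 limit)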
 

File space utilization report:
The file space utilization report enables you to see the files and the amount of space that they occupy in a deduplicated volume. You can choose to either move or delete the files to reclaim the space.


Increased maximum RAID group size for SATA disks:
Starting in Data ONTAP 8.0.1, the maximum RAID group size allowed for ATA, BSAS, and SATA disks has increased from 16 to 20 disks. The default size remains 14 disks.
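
An existing aggregate can be brought up to the new maximum with aggr options (aggr1 is a placeholder):

filer1> aggr options aggr1 raidsize 20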


FlexClone files and FlexClone LUNs support on MultiStore:
In Data ONTAP 7.3.3 and later releases of the 7.x release family, and in Data ONTAP 8.0.1 and later, the FlexClone files and FlexClone LUNs commands are available in the default and nondefault vfiler contexts.
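
So a clone can be started inside a nondefault vfiler context with vfiler run; a hedged sketch with hypothetical names:

filer1> vfiler run vfiler1 clone start /vol/vf_vol/file1 /vol/vf_vol/file1_clone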
 


 


 

NetApp Deduplication volume size limitations

I found the maximum deduplication volume size limits in one of the NetApp documents; they vary by platform and Data ONTAP release, so check the figures for your model and version.
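
Before enabling deduplication, it is worth checking the volume size against the documented limit. A minimal sketch with a hypothetical volume name:

filer1> vol size vol1            (confirm the size is within the dedup limit for your platform)
filer1> sis on /vol/vol1
filer1> sis start -s /vol/vol1   (deduplicate data that already exists in the volume)
filer1> sis status /vol/vol1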

Create a new aggregate while zeroing spare disks

cherry-top# aggr create aggr1 -r 14 -d 0a.16 0a.17 0a.19 0a.22 0a.23 0a.24 0a.25 0a.26 0a.27 0a.28 0a.29 0a.32 0a.33 0a.34
aggregate has been created with 11 disks added to the aggregate. 3 more disks need
to be zeroed before addition to the aggregate. The process has been initiated
and you will be notified via the system log as the remaining disks are added.
Note however, that if system reboots before the disk zeroing is complete, the
volume won't exist.


cherry-top#  vol status -s


Spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 0a.35 0a 2 3 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.36 0a 2 4 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.37 0a 2 5 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.38 0a 2 6 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.39 0a 2 7 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.40 0a 2 8 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.41 0a 2 9 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.42 0a 2 10 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.43 0a 2 11 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.44 0a 2 12 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)
spare 0a.45 0a 2 13 FC:A - ATA 7200 635555/1301618176 635858/1302238304 (zeroing, 2% done)


cherry-top# aggr status -v
Aggr State Status Options
aggr1 creating raid_dp, aggr nosnap=off, raidtype=raid_dp,
initializing raidsize=14,
ignore_inconsistent=off,
snapmirrored=off,
resyncsnaptime=60,
fs_size_fixed=off,
snapshot_autodelete=off,
lost_write_protect=off
Volumes:

Plex /aggr1/plex0: offline, empty, active
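
Because the aggregate stays in the creating/initializing state until zeroing finishes (and, per the warning above, won't survive a reboot before then), it helps to zero spares ahead of time:

cherry-top# disk zero spares   (zeroes all non-zeroed spare disks in the background)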




Friday, March 4, 2011

Cluster giveback cancelled or waiting

A cluster node is down and waiting for giveback. Attempting a cf giveback from the partner that has taken over generates the following error messages:

filer(takeover)> cf giveback
filer(takeover)> Thu Dec 21 21:01:55 EST [filer (takeover): cf_main:error]: Backup/restore services: There are active backup/restore sessions on the partner.
Thu Dec 21 21:01:55 EST [filer (takeover): cf.misc.operatorGiveback:info]:
Cluster monitor: giveback initiated by operator
Thu Dec 21 21:01:55 EST [filer (takeover): snapmirror.givebackCancel:error]: SnapMirror currently transferring or in-sync, cancelling giveback.
Thu Dec 21 21:01:55 EST [filer (takeover): cf.rsrc.givebackVeto:error]: Cluster monitor: snapmirror: giveback cancelled due to active state
Thu Dec 21 21:01:55 EST [filer (takeover): cf.rsrc.givebackVeto:error]: Cluster monitor: dump/restore: giveback cancelled due to active state
Thu Dec 21 21:01:55 EST [filer (takeover): cf.fm.givebackCancelled:warning]: Cluster monitor: giveback cancelled


To resolve this, make sure the following commands show no active transfers or sessions:

# snapmirror status -l
# snapvault status 
# ndmpd status (kill any active sessions)

After confirming all of the above, issue cf giveback on the partner node.
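
A sketch of the whole cleanup (ndmpd killall terminates every NDMP session, so use it only if the backups can safely be interrupted):

filer(takeover)> snapmirror status -l   (look for transfers in progress)
filer(takeover)> snapvault status       (relationships should be Idle)
filer(takeover)> ndmpd status           (list active NDMP sessions)
filer(takeover)> ndmpd killall          (terminate all NDMP sessions)
filer(takeover)> cf giveback

If a resource still vetoes the giveback, cf giveback -f forces it, at the cost of aborting whatever was active.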