FAQ

Frequently asked questions

This page answers beginners' questions about the NetApp filers, as well as frequently asked administration questions. The answers are not exhaustive; for any further information, please go to the NOW site of NetApp: all documentation is available there for the latest version of ONTAP.

When necessary, the answers give the command to execute to solve the problem. The equivalent operations in FilerView are not given, but they can easily be found in the corresponding menus. Beware, however, that not all operations are feasible with FilerView.


  • How to configure the host name of my filer?

The name of the filer can be changed with the "hostname" command, but beware of the 2 conditions below:

  1. The new name must correspond to a valid IP address: a line "X.Y.Z.T foo" must exist in the file /etc/hosts. If not, the hostname command gives the error: "hostname: hostname must be in /etc/hosts"
  2. The startup file /etc/rc uses the command "ifconfig e0a `hostname`-e0a netmask X.Y.Z.T" (the interface can have another name). So, an entry "<hostname>-e0a" must exist in /etc/hosts and the corresponding address will be associated with this interface (the name "`hostname`-e0a" is replaced at execution time by the filer name followed by "-e0a")
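
As a sketch, the two files could contain the following (the hostname "asterix", the addresses and the netmask are hypothetical and must be adapted to your site):

```
# /etc/hosts (hypothetical name and address)
192.168.0.10   asterix asterix-e0a

# /etc/rc (excerpt)
hostname asterix
ifconfig e0a `hostname`-e0a netmask 255.255.255.0
```

With these entries, the "hostname asterix" command succeeds and the ifconfig line resolves "asterix-e0a" to 192.168.0.10.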
  • How to run FilerView?

Simply run your favorite Internet browser and point it to the address "http://<filername>/na_admin". The FilerView application is the first choice on the displayed page.

  • How to configure the network interfaces?
There are 2 files concerned by this configuration: /etc/rc and /etc/hosts. The file /etc/rc contains the "ifconfig" commands, which assign to each interface the IP address associated with the host name, the subnet mask, the packet size (MTU), the flow control, etc. Refer to the man page of the "ifconfig" command for more explanations. The file /etc/hosts contains the IP addresses of all interfaces.
  • How to configure a virtual interface (VIF)?

A virtual interface is created with the "vif" command. Two kinds of vif exist:

  1. Single: in this mode, only one physical interface is active at a time. The others are in "standby" and are used only if there is a fault in the active link. This mode could be called "failover".
  2. Multiple: in this mode, all physical interfaces are used and take part in load balancing. This mode complies with IEEE 802.3ad (static configuration). It must be used with a switch which supports trunking/aggregation over multiple port connections (all ports share the same MAC address).

Base command: vif create [ single | multi ] vif_name [ -b {rr|mac|ip} ] [ interface_list ]

Single mode example: vif create single turlututu e0a e0b e0c

Multi mode example: vif create multi turlututu -b rr e0d e0e

The "-b" option is usable only in multi mode and must match the configuration of the switch. To add an IP address to the vif, simply use the ifconfig command: ifconfig turlututu 192.168.0.1.

  • How to configure and test the autosupport?

NetApp (and Bull too) recommends configuring the autosupport feature: in case of a fault, a mail is sent to the specified mailbox.

The mandatory parameters are:

    • autosupport.enable: Must be "on" to enable autosupport
    • autosupport.from: The name which appears in the mail as the sender. In some mail configurations, it can be mandatory to use an existing mailbox name, because some mail servers refuse the message if the sender is unknown
    • autosupport.mailhost: The SMTP server used to send the message. It is possible to give several names, separated by commas.
    • autosupport.to: The destination address of the (full) mail. Up to five addresses can be specified.
    • autosupport.noteto: The destination address of the (light) mail. Up to five addresses can be specified.
    • autosupport.support.*: These options are related to the sending of information to NetApp
      • autosupport.support.enable: Must be "on" to enable autosupport to NetApp
      • autosupport.support.transport: Must be "https", "http" or "smtp"
      • autosupport.support.proxy: The name of the proxy server when the "http" or "https" transport is used
Syntax: URL:Port - Ex: www-proxy.netapp.com:8080
Or, when authentication is necessary: user:password@URL:Port - Ex: john:unknown@www-proxy.netapp.com:8080
      • autosupport.support.to: Internal address of NetApp - do not modify it
      • autosupport.support.url: Internal address of NetApp - do not modify it
    • autosupport.doit: By default, the value of this parameter is "DONT". If you change it to another value, it triggers the sending of an autosupport and is then set back to "DONT". This is the way to test that the parameters are correct.

To display or set these parameters, use the "options" command.
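
For example, a minimal setup might look like the following (the mail host and addresses are placeholders to adapt to your site; the final command triggers a test autosupport):

```
filer> options autosupport.enable on
filer> options autosupport.mailhost smtp.example.com
filer> options autosupport.from filer1@example.com
filer> options autosupport.to admin@example.com
filer> options autosupport.doit TEST
```

If the parameters are correct, the mailbox given in autosupport.to receives the test message.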

  • How to set date and time?

There are several commands to do this:

    • date: 

format: date [ -u ] [ [[[[cc]yy]mm]dd]hhmm[.ss] ]

Where:
cc: The first 2 digits of the year (the century - currently 20, and for a long time)
yy: The last 2 digits of the year (i.e. "06")
mm: The 2 digits of the month - with a leading "0" if there is only one digit
dd: The day of the month (from 01 to 31)
hh: The hours of the day (from 00 to 23)
mm: The minutes of the hour (from 00 to 59)
ss: The seconds (optional, from 00 to 59)

The "-u" option can be specified to display or set the date in GMT format.

Without parameters, "date" displays the current date and time.
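As an illustration of the argument format, this short Python sketch (not an ONTAP tool, just a helper you could run on an admin workstation) builds the string expected by "date" from a conventional date:

```python
from datetime import datetime

def ontap_date_string(dt: datetime) -> str:
    """Format a datetime as the [[[[cc]yy]mm]dd]hhmm[.ss] argument of 'date'."""
    # %Y%m%d%H%M gives ccyymmddhhmm; the optional seconds follow a dot.
    return dt.strftime("%Y%m%d%H%M.%S")

# 18:30:00 on 15 June 2006 -> "200606151830.00"
print(ontap_date_string(datetime(2006, 6, 15, 18, 30, 0)))
```

On the filer, the resulting string would be passed directly: date 200606151830.00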

    • rdate

format: rdate <hostname>

This command uses the date and time of a remote server to set the local time of the filer. "hostname" is the remote server.

    • Automatic synchronization

It is possible to set the filer to automate the date/time update. The following parameters are used to do this:

      • timed.enable: must be "on" to enable synchronization to an external source
      • timed.proto: can be "rtc" to use the internal clock, "rdate" to use the rdate protocol or "sntp" to use the NTP protocol
      • timed.servers: Names of one or several time servers. It is possible to use an internal server, but also an Internet time server. A list of such servers can be obtained from http://www.pool.ntp.org/. However, synchronization over the Internet requires a modification of the firewall rules: the NTP client needs to connect to port 123 of the time server using the UDP protocol.
There are other parameters, but their default values are appropriate in most cases. For more explanations, refer to the "System Administration Guide", chapter "Performing General System Maintenance".
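A minimal NTP setup could therefore be (the server name is only an example; an internal time server is usually preferable):

```
filer> options timed.enable on
filer> options timed.proto sntp
filer> options timed.servers pool.ntp.org
```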
  • How to configure the name resolution services?

There are 2 files to configure: /etc/resolv.conf and /etc/nsswitch.conf. The resolv.conf file contains the IP addresses of the name servers (the keyword "nameserver" followed by the IP address) and the nsswitch.conf file contains the order in which the different name resolution services are queried. One or several of the options "dns.enable" and/or "nis.enable" must be enabled; this is usually done at filer startup in the file /etc/rc. The option "dns.domainname" must also be set.
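
As a sketch (the addresses and the domain are hypothetical), the configuration could look like:

```
# /etc/resolv.conf
nameserver 192.168.0.10
nameserver 192.168.0.11

# /etc/nsswitch.conf (query the local /etc/hosts file first, then DNS)
hosts: files dns

filer> options dns.enable on
filer> options dns.domainname example.com
```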

  • How to configure CIFS?

The base command is "cifs setup". CIFS must be stopped with "cifs terminate" before running this command. After running "cifs setup", the basic setup of CIFS is activated. If you want to add a share, use the command "cifs shares -add <share name> <qtree>".
Example: cifs shares -add partage /vol/vol1/partage

  • How to access my files from a Windows PC?

The same way that you connect to any Windows share: use the choice "Map Network Drive..." available in the Tools menu of the Explorer. For the folder, give a name like \\filername\sharename and, possibly, a different username.

  • How to configure NFS?

There is no basic configuration for NFS, except when using the Kerberos security mechanism. In that case, you must run "nfs setup" and answer the questions. To create the NFS shares, there are 2 possibilities:

    • Modify the /etc/exports file, then run "exportfs -a"
    • Or run the exportfs command with the "-p" option, which automatically adds the name of the share to /etc/exports
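For example, with the "-p" option (the qtree path and client subnet are hypothetical, and the exact export options depend on your security needs):

```
filer> exportfs -p rw=192.168.0.0/24 /vol/vol1/partage
filer> exportfs          # display the current exports
```

The first command both exports the qtree immediately and persists the entry in /etc/exports.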
  • How to access my files from a Unix server?

The same way as from any NFS server: using the mount command. 
For example on Solaris: mount -F nfs filer:/vol/partage /mnt

On some Unix systems, the command "showmount -e filer" can be used to display the shares exported by this server.
Example:

host> showmount -e numerobis
export list for numerobis:
/vol/vol0/home (everyone)
/vol/vol0      (everyone)

  • What are the tools to diagnose a network problem between a filer and a server?
The list below gives several commands to test different network elements. The equivalent names on Unix and Windows are given when available:

    • ping (Unix: ping - Windows: ping, pathping): Test the availability of a remote system
    • getXXbyYY (privileged command: "priv set admin") (Unix: nslookup - Linux: host - Windows: nslookup): Query of DNS - The ONTAP command uses the full resolution mechanism (it uses the file nsswitch.conf; therefore, it can give the address/name of a server which is in the file /etc/hosts)
    • traceroute (Unix: traceroute - Windows: tracert, pathping): Trace the gateways to reach a specific destination
    • arp (Unix: arp - Windows: arp): Display (arp -a) the MAC address <--> IP address table (for the local subnet)
    • dns (no Unix/Windows equivalent): Display (dns info) the configured DNS status - Purge the DNS cache (dns flush)
    • netstat (Unix: netstat - Windows: netstat): Display the network connections
    • netdiag (no Unix/Windows equivalent): Check the network interfaces
    • showmount (Unix only): With the "-e" option, display the exported shares of an NFS server (filer included)
  • What is an aggregate?

This element exists since Data ONTAP 7.0. It was introduced to support the flexible volumes (see the definition on this page). An aggregate is the first element to create if a flexible volume is needed.
An aggregate comprises entire physical disks: it is not possible to use only a portion of a physical disk, only whole disks.
An aggregate comprises at least one RaidGroup: a RaidGroup is a group of physical disks and constitutes a self-protected unit, i.e. it owns one or several parity disks (one for Raid4 and 2 for RaidDp).
An aggregate can contain several RaidGroups: if the raidsize of the RaidGroup is 7 and you put 20 disks in your RaidDp aggregate, there will be 3 RaidGroups (2 of 7 disks and 1 of 6 disks), and every RaidGroup will have 2 parity disks for RaidDp.

  • What is a volume?

Since Data ONTAP 7.0, there are 2 kinds of volume:

    • Traditional volume: before Data ONTAP 7.0, this was the only existing kind of volume. It is constituted of one or more entire physical disks. It is not possible to split a physical disk to create a traditional volume.
    • Flexible volume: they exist since Data ONTAP 7.0. It is now possible to create a space whose size depends not on a number of physical disks but on the wanted space. The minimal size is 20MB. A flexible volume is based on an aggregate.
  • What is a qtree?

A qtree is an internal partition of a volume. The qtree is usually the unit which is exported by NFS or CIFS. It is possible to set a Unix or NTFS security style on a qtree. Moreover, it is possible to assign a "tree" disk quota to a qtree.

  • How to configure snapshots?

By default, snapshots are configured automatically by Data ONTAP, at fixed hours and days. There are snapshots for the aggregates and for the volumes.

To display the list of snapshots for the aggregates: snap list -A
To display the list of snapshots for the volumes: snap list -V ("-V" is optional because it is the default option)
To display the snapshots' schedule: snap sched [-A|-V] ("-A": aggregate - "-V": volume)

With the last command, we get an output like: Volume vol0: 0 2 6@8,12,16,20
This can be translated as: 0 preserved weekly snapshots - 2 preserved daily snapshots - 6 preserved hourly snapshots. The hourly snapshots are created at 8:00, 12:00, 16:00 and 20:00. Weekly snapshots are created at midnight every Sunday, and daily snapshots at midnight every day except Sunday.

To schedule a snapshot, use the command: snap sched [-A|-V] <volume or aggregate> <scheduling string>
For example: snap sched -V vol0 0 2 6@8,12,16,20
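
The scheduling string above can be decoded mechanically. This Python sketch (an illustration, not an ONTAP tool) parses such a string:

```python
def parse_snap_sched(sched: str):
    """Parse an ONTAP snapshot schedule string such as '0 2 6@8,12,16,20'.

    Returns (weekly, daily, hourly, hours): the number of weekly, daily
    and hourly snapshots preserved, and the hours at which the hourly
    snapshots are taken (empty list if no hour list is given).
    """
    fields = sched.split()
    weekly = int(fields[0])
    daily = int(fields[1])
    hourly, hours = 0, []
    if len(fields) > 2:
        if "@" in fields[2]:
            count, hour_list = fields[2].split("@")
            hourly = int(count)
            hours = [int(h) for h in hour_list.split(",")]
        else:
            hourly = int(fields[2])
    return weekly, daily, hourly, hours

print(parse_snap_sched("0 2 6@8,12,16,20"))
# (0, 2, 6, [8, 12, 16, 20])
```

For the example schedule, the parser confirms the reading given above: no weekly snapshot, 2 daily snapshots and 6 hourly snapshots taken at 8:00, 12:00, 16:00 and 20:00.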

  • I deleted a Windows file by mistake. How to restore it using the snapshots?

If a snapshot has been created between the moment when the file was created/modified and the moment when it was deleted, then you can access the backup of your file using the following method:
Using "Start\Run...", explore the filer with the following path: \\<filername>\<share>\.snapshot (or ~snapshot or ~SNAPSHT)
filername is the name of your filer
share is the name of the share where your lost file was

The lost file can be found in the subdirectories of ".snapshot".

By default, the directory ".snapshot" is hidden when exploring the filer with Windows. The option "cifs.show_snapshot" modifies this behavior: if this option is set to "on", the ".snapshot" directory becomes visible when exploring the share, provided that the PC configuration allows seeing hidden files (option "Show hidden files" of the Explorer).

  • If I am directly connected to a filer, how to see the content of a directory?

Use the "ls" command. This command is only available when you are in "diag" mode ("priv set diag")

  • What is the reserved space for the snapshots?

By default, ONTAP reserves some space for the aggregate and volume snapshots. When an aggregate is created, 5% of its size is reserved for the snapshots. If a volume is created in this aggregate, 20% of the size of the volume is reserved for the snapshots.
Therefore, if you want a specific usable size for your volume, you must increase it by about 32% to get the required size of the aggregate (if there is only one volume on your aggregate).
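
The "about 32%" figure follows from compounding the two reserves, as this small Python sketch shows (the 100 GB volume is just an example):

```python
def aggregate_size_needed(usable_gb: float,
                          vol_reserve: float = 0.20,
                          aggr_reserve: float = 0.05) -> float:
    """Size (GB) an aggregate must have so that its single volume offers
    'usable_gb' of space after the default snapshot reserves
    (20% on the volume, 5% on the aggregate)."""
    vol_size = usable_gb / (1 - vol_reserve)   # volume must be 25% bigger
    return vol_size / (1 - aggr_reserve)       # aggregate 5% bigger again

# 100 GB usable -> about 131.6 GB of aggregate, i.e. roughly +32%
print(round(aggregate_size_needed(100), 1))
```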

If you don't need the snapshots, you can disable them using the following commands:

filer> snap sched -A <aggregate name> 0 0 0	# Disable the snapshot scheduling of this aggregate
filer> snap delete -A -a <aggregate name> # Delete all known snapshots for this aggregate
filer> snap reserve -A <aggregate name> 0 # Set to 0 the reserved space for this aggregate
filer> snap sched <volume name> 0 0 0 # Disable the snapshot scheduling of this volume
filer> snap delete -a <volume name> # Delete all known snapshots for this volume
filer> snap reserve <volume name> 0 # Set to 0 the reserved space for this volume