I've been really busy lately,
busy with the domain consolidation and mail server update at our overseas branch,
so it's been a long while since I updated the blog.

This is something I found online; sharing it for anyone who needs it.

====================================================================
Openfiler 2.3 Basic NAS Setup

This is intended for the basic user.  The purpose here is to create a NAS for basic home use.  At some point we will have a Hardware FAQ, but for now, you will have to make do with picking 1+ year old hardware, and keep it mainstream, for the best compatibility results.  Check the Forums for specific hardware compatibility information if you have questions before buying.  Chipset compatibility is the thing you need to look out for the most.  I also assume here that all disks you are using are the same size.  I won't be covering any specific hardware RAID controllers or flash memory drives in this initial doc, either.

For this effort, I have used a LaCie 3TB 1U NAS that has 4x ~750GB SATA disks in it.  It has 256MB and a C7 CPU (roughly a P3/P4 Intel Compatible, w/1Ghz clock - Google and research for details).  Disk configuration is manual as per the installation docs, with a modification in partitioning to use 2048MB on each disk for /boot, /, /var/log, and swap partitions (simplicity now, and later, when we make RAID partitions).  You should start with no partitions on the unit if at all possible.  (Recovery from a past or damaged installation is not covered here).  (EDIT: If you wish to make use of software RAID for the boot environment, detour to this post for how to make a RAID1 boot environment:
https://forums.openfiler.com/viewtopic.php?id=2501 ).  I then manually programmed in the IP address, as is highly suggested, as DHCP really should not be used.  I programmed in a mask, DNS, and gateway when prompted to.  You need DNS to properly contact Openfiler for the code updates after install.  Put in a password; this is the password for the 'root' user if you are logging in to the console or via SSH terminal.  Upon reboot, I log in as root and provide the password used during install.  Run the command 'conary updateall' and let it finish.  Reboot with the 'shutdown -r now' command.  Upon reboot, note the https://[address]:446 type address for the GUI.

Switch to your workstation with (preferably) a Firefox browser and go to that address.  Login with the credentials 'openfiler' and the password 'password'.

Step one in the GUI setup is to enable the LAN(s) that you want Openfiler resources to be seen by.  You will likely want to include the local LAN, of which a typical home LAN might use the values of {name} HomeLAN, {Network} 192.168.0.0, and {mask} 255.255.255.0.  Click add after each entry.

Next, it is assumed you will use the local LDAP server within Openfiler for local authentication.  If you are using external LDAP, or Active Directory authentication, you are probably knowledgeable enough about your corporate LAN to populate the appropriate fields or search out another example in the forums, or wait for me to document it in another more advanced guide.  (My OF2.2 Install Guide in the Forums may provide tips if you get stuck).  With the internal LDAP, you want to enable the following boxes, and fill in the following information:

Click on Accounts, and enable the 'Use LDAP' and 'Local LDAP Server' boxes.  The Base DN has the default of 'dc=example,dc=com'; we need a variation of that in the next line, called 'Root Bind DN', so paste in there 'dc=openfiler,dc=example,dc=com'.  Type in a random password for security (we don't care whether we know it or not), and mark the box to 'Login SMB server to root DN'.  Apply the changes at the bottom of the screen.

Switch to the Services tab.  The LDAP service should be enabled.  If it is not, try to enable it.  If it won't enable, check your work on the previous screen.  If all is good, then while you are here, enable the SMB / CIFS server and (optionally) the FTP server.  These are the most useful communication methods for basic storage on small LANs.  I won't discuss the other options at this time.

Click back on the Accounts tab, and then the Administration sub-option on the right.  As of this time, you seem to need to manually override the GID (groups) and UID (user) numbers you create, starting at 501 instead of letting them self-populate at 500.  If you need more than one, you will increment with values 502, 503, 504, etc, uniquely for each one you need.  Why? Administration 101 says: you login with user names, who you place in groups, and you assign rights/access to groups.  Additionally, GIDs and UIDs have no other relationship, they are just numbers.  Therefore, create a GID called 'gp', with GID 501, then click the tab to create a user with the name of 'openfiler', UID 501, and in the group 'gp'.  Give it a password you will remember.

Disk partitioning - It is assumed that you cleared out all partitioning during install.  In my configuration, with 4 disks, I plan to use RAID5 for the data environment.  The pros and cons of each RAID configuration are covered elsewhere (search on 'Wiki RAID') and can be researched.  I need a balance of speed and data redundancy.  It should be noted that I do not have RAID on my Openfiler installation partitions (unless you followed my edit-detour), so that could be a problem if the wrong disk fails, but the only solution to that is to use flash drive/solid state disks for those partitions (except swap), and it was not an option for me at this time of writing.  If you only have 2 disks, your options are to run with no safety net (no RAID, or RAID0), or use RAID1 (mirroring).  I will cover RAID5 here as that is what I have to work with.

Click on the Volumes tab.  Click on the 'create new physical volumes' link.  Under the 'Edit disk' header, you will be going into (clicking on) each disk device and building a RAID partition of the same size.  After clicking a link, scroll down the screen, modify the Partition type to 'RAID array member', accept the defaults (all space, primary partition), then click Create.  There is a link to go back one level under the pie chart (note: Internet Explorer may not display the pie chart - an IE rendering bug, I am told, by the developers).  Repeat for each disk.

After completing the last partition and returning, click on the Software RAID link on the right.  On that screen, modify the RAID Array Type to RAID-5 (parity), and because I plan to use this for only larger-sized files, I bumped the chunk size up to 512K, but the default 64K is sufficient for most generic applications.  I also selected all four disks under the 'X'.  Click Add Array when all are present.  On the Software RAID Management screen that appears, you will get a state/synchronization status of 'Clean & degraded, Not started'.  This is the system initializing the array.  Allow time for this to complete, or you risk corruption and having to start over.  The sync process will start within about 5 minutes and take however long it needs (dependent on your disk/array sizes).  Do not shut down the unit or reboot until it completes.  It may take a couple of hours if your disks are larger.  Once it starts syncing, you will see 'Clean & degraded & recovering / Progressing x%'.  My system (obviously lower on CPU performance) took over 14 hours to complete.
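If you prefer watching the sync from the console instead of the GUI, /proc/mdstat shows the same progress.  The snippet below is a sketch: the sample output (device names, block counts, percentage) is made up for illustration, and on the real box you would simply run 'cat /proc/mdstat' (or 'watch cat /proc/mdstat').

```shell
# Hypothetical mid-sync /proc/mdstat contents for a 4-disk RAID5 (sample only);
# on the NAS itself: cat /proc/mdstat
mdstat_sample='md0 : active raid5 sdd2[4] sdc2[2] sdb2[1] sda2[0]
      2197715968 blocks level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [==>..................]  recovery = 12.5% (91571968/732571989) finish=143.2min'

# Pull out just the recovery percentage from the status text
echo "$mdstat_sample" | grep -o 'recovery = [0-9.]*%'
```

When the 'recovery' line disappears and the array shows all members up (e.g. [UUUU]), the sync is done and it is safe to continue.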

Optional Steps: While you are waiting for the sync to start and complete, there is an option to set up the smartd disk monitoring subsystem in the command line if you care to venture there.  It is not required.  If you are game to set it up, then before you get out of the GUI for a while, click on the 'members' link next to your array.  Note the devices and how they are listed (i.e. /dev/hda2, /dev/hdb2, or /dev/sda2, /dev/sdb2, and similar).  We don't care about the digit, but the other parts (letters) are the designations for the drives.  You may have noticed an error during bootup for 'SMARTD' unless you use IDE disks.  The default is to monitor /dev/hda, or the first IDE disk.  The file to modify is in /etc: log in to the console or via SSH terminal with the 'root' account, then use the command 'nano -w /etc/smartd.conf' - scroll down and note the existing entry.  In my case, my disks are seen as /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd, so I modify the existing '/dev/hda' to read '/dev/sda', and then add similar lines to reference the other disks in my RAID.  The options for this file are documented online and support more complex needs, but for now the defaults will send alerts to the NAS's internal mail handler.  The next time you reboot, it should not error out.  If it does, your controller or drives may not support SMART monitoring.  (EDIT: The SATA disk drives in this unit did not support SMART monitoring, later verified with the smartctl command.)
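As a sketch, the edited /etc/smartd.conf for my four-disk case might look like the fragment below.  The '-a' (monitor all attributes) and '-m root' (mail alerts to root, i.e. the NAS's internal mail handler) options are one common minimal choice, not necessarily what your stock file contains; adjust the device names to match your 'members' list and check the smartd.conf documentation for the full option set.

```
# /etc/smartd.conf - one line per physical disk (drive letters from the
# RAID members list, minus the partition digit)
/dev/sda -a -m root
/dev/sdb -a -m root
/dev/sdc -a -m root
/dev/sdd -a -m root
```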

(Note: Hardware RAID controllers generally don't support SMART monitoring, so this will not apply to those situations where RAID is handled on a hardware controller - you would generally not be building a software RAID if you have a functional hardware RAID controller, either). 

There are some housekeeping steps further down (except for the part about backup) that you can do that should not cause any problems with the sync process.  Just don't start the data partition creation step until the software RAID has completed the rebuilding phase.  Otherwise stop here and let it finish the sync process.


The next step is to create the shares, and in that process, you need to create the folders to share. 

Click on the Volume Groups side option under Volumes.  For simple all-space shares, I use 'vg' for the volume group.  More complex environments might want to use better naming conventions, but learn how they produce the share names first, and keep them short unless you like typing a lot.  Select your disk space, and in this case it's /dev/md0 (my RAID partition), and click 'Add Volume Group'.  Next, click on the Add Volume item on the sub menu on the right.  Scroll down and enter 'vol1' or something simple.  Enter a note for your reference (I used 'All Disk space'), and expand the size as needed (up to the max).  Leave the file system as XFS unless you feel you have reason to make the change to 'ext3' (and iSCSI is not actually a file system, it's for iSCSI direct attached storage mode, and not covered here).  Click Create.  Repeat this set of steps if you did not use all the space and want to build additional volumes in this partition.

Next, pick the 'Shares' menu item.  Your volume space(s) should now be present with a link.  Click on the link, and create a folder.  This is likely where you are going to build a share, so pick a relevant name.  You can make deeper folders within the directory structure and share those too, depending on your needs.  Click on the folder link you just created, then click the Make Share button without making any other changes in that box.  On the next screen, you have the chance to define the share name.  The default share name is a combination of volumegroup.volume.foldername, but it can be overridden by supplying an override share name.  This is where to use a share name that fits the situation you need it for.  It lets you reference the share in the format //IP_or_HostName/ShareName, which is why the override is preferred as long as you are not in a complex environment.  Click Change after each change you make, then scroll down and enable Controlled Access, followed by 'Update'.  Then scroll all the way down and click the buttons for PG (required - you do need to define a primary group) and RW (for Read/Write) next to your share, and click Update in that section.  Further down at the bottom, you need to define the types of access permitted and the LANs they can originate from.  You need only enable SMB/CIFS and FTP at this time, as those are the services you enabled earlier.  You typically enable RW for Read/Write.  Click Update.  Repeat the process if you have decided to make more than one share, or have more than one volume to create a share on.


Map A Drive

(Under Windows) If you just want URL access to the space to test, you can use \\IPAddress\ShareName format in Explorer, and supply the login and password credentials you set up.  You can also do a Map Network Drive and supply the share path and the credentials.

Under Linux, you will need to make a folder for the mount point, of which VMWare people typically use /vmimages/smb or similar folder paths.  Then in the /etc/fstab file, you edit with 'nano -w /etc/fstab' and enter a line to link the mount point to the network share.  An example line might be:

//some.hostname.com/backups  /vmimages/smb1 smbfs rw,dmask=777,fmask=777,credentials=/root/.credentials/.smb_access 0 0

Modify the first two items, which are [Your Share on OF] and [Your Path for the share to be seen locally]

This also requires you to create a hidden folder: 'mkdir /root/.credentials', then edit the hidden file: 'nano /root/.credentials/.smb_access' and place the following login and password lines in it (modify for your credentials):

username=openfiler
password=somepassword

Use Ctrl-X to exit and follow the prompts to save the file.
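The same credentials file can be created non-interactively.  The sketch below follows the steps above; the tutorial's path is /root/.credentials, but a scratch directory is used here so the commands can be run (and verified) without root.  The chmod lines are my addition - locking the file down is a good idea since it holds a plaintext password.

```shell
# Create the hidden credentials directory and file (scratch path for testing;
# on the NAS client you would use /root/.credentials as in the text above)
CRED_DIR="$(mktemp -d)/.credentials"
mkdir -p "$CRED_DIR"

# Write the username/password lines the mount will read
cat > "$CRED_DIR/.smb_access" <<'EOF'
username=openfiler
password=somepassword
EOF

# Restrict access so only the owner can read the stored password
chmod 700 "$CRED_DIR"
chmod 600 "$CRED_DIR/.smb_access"
echo "credentials file written"
```

Once the file exists, the fstab entry's credentials= option will pick it up on the next mount.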

If all is good, you can mount the share with 'mount -a' and no errors will come back.



Some Additional Housekeeping


Time/Clock - Set the time correctly, or let NTP do it for you.

On the System tab, Clock Setup sub-section, you can use one of the suggested NTP servers; however, if you go to ntp.org, you can get the name of the regional NTP server you should be polling.  In the United States, the servers are 0.us.pool.ntp.org, 1.us.pool.ntp.org, or 2.us.pool.ntp.org.  Pick one and click Setup Synchronization.  You can also adjust the timezone if you set it improperly during install.


Notifications - It might be nice to know when something is wrong.

If you are in a network environment with a mail server, you can have the system send alerts for trouble.  Enter a recipient e-mail address (yourmailbox@yourdomain.com); for the sender, you usually make up a name of the form hostname@yourdomain.com; then enter the DNS name of your mail server.

Otherwise, you may want to enable audible alarms to alert you to problems.  To retrieve internal (to the NAS) alert messages, log in to the command line (console or SSH, as 'root') and type 'mail' as a command.  It will tell you how many messages there are.  Type in a number to read a specific message.  '?' gives you a help menu.  To delete messages, type in the format 'd 1', where d=delete and 1=message number one.


Backup - Perform a backup off the System tab just in case.  Let it save a copy to your workstation.

End tutorial.

There will probably be more to come at some point in the future.

Suggestions and corrections are welcome.  If you have a problem, and searching the forum does not produce useful info, please start a fresh thread.

And thanks to maxim for his starter thread here:
https://forums.openfiler.com/viewtopic.php?id=2423 which got me motivated, and LaCie for giving enough grief in a short time to make us consider wiping their Embedded XP software off and replacing it with Openfiler 2.x more than once.

Source:
http://blog.smps.tp.edu.tw/f2blog/index.php?load=read&id=180

Posted by ITMan on Pixnet.

