Distributed File Systems or Centralized File Systems?

Many professionals, especially engineers and architects, now work from home offices or collaborate in small teams that are no longer centralized in one main office but spread all over the country. How does the engineer in Philadelphia share large CAD files with the general contractor running the project in Tampa? The old answer was FTP, but there are two key problems with that methodology:

  • The files are large and take a long time to upload and download.
  • The files can develop revision conflicts if two people edit the same file at once.

So, IT professionals have decisions to make. Do they employ a solution such as SharePoint for its "file locking" (technically a check-in and check-out system)? Do they invest tens of thousands of dollars at each site in WAN optimization? Are there other technologies they can use?

The most prevalent solution to these problems is a distributed file system. A distributed file system "distributes" the files to each user, so download time is minimal and changes are simply replicated out to the other users of the files. When deployed properly it works beautifully: files open at local-network speed without the cost of WAN optimization, and file locking is available when paired with the right third-party software.
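To make the replication model concrete, here is a minimal Python sketch, with made-up paths, of the core loop such a system runs: detect locally changed files and push copies to peer sites, so every user opens files at local-disk speed while changes fan out in the background. Real products use change journals and file locking rather than this naive modification-time scan.

    import shutil
    import time
    from pathlib import Path

    # Hypothetical local working copy and peer-site replicas.
    LOCAL = Path("/data/projects")
    PEERS = [Path("/mnt/replica_tampa"), Path("/mnt/replica_philly")]

    def replicate_changes(last_sync: float) -> float:
        """Copy every file modified since last_sync to each peer replica."""
        now = time.time()
        for f in LOCAL.rglob("*"):
            if f.is_file() and f.stat().st_mtime > last_sync:
                for peer in PEERS:
                    dest = peer / f.relative_to(LOCAL)
                    dest.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(f, dest)  # peers now read this file locally
        return now

    last = 0.0
    while True:
        last = replicate_changes(last)
        time.sleep(30)  # only the deltas cross the WAN, not whole-file FTP pulls

The point of the sketch is the division of labor: reads are always local, only changes travel, and a locking layer in front of the copy step is what prevents the concurrent-edit problem noted above.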

If your organization has been trying to figure out how to share large files, it should consider a distributed file system.

Network Monitoring For Satellite Teleconference, Distance Learning, and Media Distribution

Today’s Satellite Systems

Many satellite system projects these days involve satellite uplinks feeding hundreds or perhaps thousands of receivers in the network. Traditionally, if these networks are monitored at all, the planners rely on SNMP traps for major failures and on round-robin polling and pinging to determine the health of the receivers and other devices.

With traditional monitoring systems, polling speed means that NOC (network operations center) personnel may wait an hour or two to know the status of all devices. The typical per-device checks include, for example (a minimal polling sketch follows the list):

  • Is the device alive?
  • Can it be pinged?
  • Is the receiver on the right channel?
  • What is the signal level?
  • Is it locked to the transmission?
  • What are the error rates?
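As a concrete illustration of round-robin polling, here is a minimal Python sketch using the pysnmp library (the classic synchronous hlapi). The addresses and community string are assumptions, and a generic sysUpTime query stands in for the receiver-specific checks (channel, signal level, lock), which live in vendor MIBs:

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    RECEIVERS = ["10.0.1.10", "10.0.1.11"]  # hypothetical receiver addresses

    def poll(host):
        """One SNMP GET per device; sysUpTime doubles as an is-it-alive check."""
        error_ind, error_status, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData("public"),                  # assumed community string
            UdpTransportTarget((host, 161), timeout=2, retries=0),
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0)),
        ))
        if error_ind or error_status:
            return host, None                         # unreachable or errored
        return host, var_binds[0][1]

    # Round-robin: one device at a time, which is why a large fleet
    # takes an hour or two to sweep.
    for receiver in RECEIVERS:
        print(poll(receiver))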

As for traps, a little secret: many devices do not support them, and even when they do, traps can be lost because they are sent as UDP traffic with no assured delivery. The requirements of today's professional satellite systems have evolved to the point where NOC personnel need more information, and they need it more quickly. Teleconference, distance learning, and media distribution systems are representative.

For these new systems, the requirements for monitoring have moved beyond a strictly maintenance need to both a maintenance and an operational requirement.

Teleconference and distance learning have an almost interactive need for status and data (a readiness-check sketch follows this list):

1. Is the receiver on the right channel?

2. Is the feed good? Are error rates low and the signal level high?

3. If the signal is low in Milwaukee, what’s the weather like?

4. Are all of the correct materials downloaded to the receiver?

5. Are keypads and other data entry devices ready?
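As a sketch of what that interactive status could look like in software, the Python record below rolls those checks into a single readiness test. The field names and thresholds are illustrative assumptions, not any product's schema:

    from dataclasses import dataclass

    @dataclass
    class ReceiverStatus:
        channel: int
        expected_channel: int
        signal_dbm: float
        bit_error_rate: float
        materials_downloaded: bool
        keypads_online: bool

        def is_ready(self) -> bool:
            """The site is a 'go' only when every check passes at once."""
            return (self.channel == self.expected_channel
                    and self.signal_dbm > -60.0       # assumed signal floor
                    and self.bit_error_rate < 1e-6    # assumed error ceiling
                    and self.materials_downloaded
                    and self.keypads_online)

    milwaukee = ReceiverStatus(7, 7, -55.2, 2e-7, True, True)
    print(milwaukee.is_ready())  # True: this site can join the session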

Media distribution systems for broadcast and digital cinema share many of the same needs, plus some others:

1. Is there sufficient space on the device to receive the huge files used in these operations?

2. What is the progress of the transfer (it takes a long time to transfer multi-gigabyte files)?

3. Did the digital rights management (DRM) keys arrive?

4. If it is a playout device, did the correct playlist arrive?

5. Can we get the playback logs as events play?

6. What about maintenance logs? Do we have to SSH into each device and retrieve them manually, or will the system gather and check them for us automatically (see the sketch after this list)?
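Here is a short Python sketch of two of these checks: free space is a standard-library call, and the log pull uses the widely used paramiko SSH library. The host name, account, key file, and log path are all illustrative assumptions:

    import os.path
    import shutil
    import paramiko  # third-party SSH library: pip install paramiko

    def free_space_gb(path="/media/ingest"):
        """Item 1: is there room on the device for a multi-gigabyte file?"""
        return shutil.disk_usage(path).free / 1e9

    def fetch_maintenance_log(host, user, keyfile):
        """Item 6: gather a device's log automatically instead of by hand."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user,
                       key_filename=os.path.expanduser(keyfile))
        try:
            _, stdout, _ = client.exec_command(
                "tail -n 200 /var/log/receiver.log")  # assumed log location
            return stdout.read().decode()
        finally:
            client.close()

    print(f"{free_space_gb():.1f} GB free")
    log = fetch_maintenance_log("receiver-042.example.net", "noc",
                                "~/.ssh/id_ed25519")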

These lists are representative of the information the NOC needs to ensure proper operation of the network. First- and second-generation monitoring systems don't even begin to gather and report the new types of information needed to operate these systems reliably.

Third Generation Network Monitoring

Satellite network monitoring systems must run parallel collection processes in order to have sufficiently fresh data to be of value to the NOC. For many types of operations, media content must be tracked as well. You could argue that media is not part of network monitoring, and yet today's NOC needs this information to ensure proper operation. A new generation of devices is out there delivering media content, and real-time or near-real-time reporting is needed to ensure these systems operate properly. New ways to visualize the network, to go along with these new data sets, are also required.
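A minimal Python sketch of the parallel-collection idea, in contrast to the round-robin loop shown earlier: with a worker pool, a thousand receivers can be probed in roughly the time one slow probe takes. The TCP status port here is a stand-in for whatever real probe (SNMP, an HTTP API) the devices expose:

    import socket
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical fleet of 1,000 receiver addresses.
    RECEIVERS = [f"10.0.{i // 256}.{i % 256}" for i in range(1, 1001)]

    def check_receiver(host):
        """Stand-in probe: TCP connect to an assumed status port."""
        try:
            with socket.create_connection((host, 8080), timeout=2):
                return host, True
        except OSError:
            return host, False

    # 100 workers poll concurrently: the whole fleet finishes in seconds,
    # not the hour or two a serial sweep needs.
    with ThreadPoolExecutor(max_workers=100) as pool:
        down = [host for host, ok in pool.map(check_receiver, RECEIVERS)
                if not ok]
    print(f"{len(down)} receivers unreachable")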

It’s a new game in the network monitoring world: network monitoring software has to move to the next level, including becoming media-aware, to meet the information needs of today’s NOC.

MCSE Distributed File System

A distributed file system (DFS) is a file system that facilitates the sharing of files and resources through persistent storage across a network. The earliest file servers were designed in the 1970s. Following its introduction in 1985, Sun's Network File System (NFS) became the most widely used distributed file system. Besides NFS, notable distributed file systems include the Common Internet File System (CIFS) and the Andrew File System (AFS).

The Microsoft Distributed File System (DFS) is a client/server solution that enables a large organization to manage many distributed shared files within a single file system. By permitting shares in a number of different locations to be logically arranged under one folder, the DFS root, it delivers location transparency and redundancy, which enhances data availability in the event of a failure or heavy load.

It is a client/server-based service that permits users to access and work on files located on the host server as if they were on their own computer. When a user accesses a file on the server, the server transmits a copy of the file, which is cached on the user's computer while the data is being worked on; the file is subsequently written back to the server.

Whenever a user accesses a share under the DFS root, the request actually goes through a DFS link, which allows the DFS server to transparently redirect it to the appropriate share and file server.
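From a program's point of view that redirection is invisible: the client simply opens one logical path under the namespace. A tiny Python illustration, with a made-up domain and shares:

    from pathlib import PureWindowsPath

    # One logical path under a hypothetical domain-based DFS root.
    # The client never learns which file server actually holds the file;
    # the DFS server resolves the "Projects" link to a real share.
    plan = PureWindowsPath(r"\\corp.example.com\Engineering\Projects\tampa\site.dwg")
    print(plan)

    # On a domain-joined Windows client this would read the file exactly
    # as if it lived on a single well-known server:
    # data = open(plan, "rb").read()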

There are two ways to deploy DFS on Windows Server:

A standalone DFS root exists only on the local computer and therefore does not make use of Active Directory. A standalone DFS namespace is hosted only on the computer on which it was created; it offers no fault tolerance and cannot be linked to any other DFS.

Domain-based DFS roots are stored in Active Directory, which allows their configuration to be replicated to any number of domain controllers in the domain; this gives DFS fault tolerance. A domain-based DFS root must be hosted on a server that is a member of the domain. This ensures that links with identical targets receive all of their replicated data across the network. Root and file data are replicated by the Microsoft File Replication Service (FRS).

Advantages of DFS

1. Easy access: users do not need to know the various locations from which they acquire data. By remembering a single location, they have access to all of it.

2. Fault tolerance: for the master DFS host server, it is possible to maintain a duplicate (target) on another DFS server. If the master DFS server goes down, end users can continue accessing the data from the backup DFS target, with no interruption in access to information.

3. Load balancing: when all of the DFS root servers and targets are operating in good condition, the result is load balancing. This is often accomplished by specifying different target locations for different users.

4. Security: by making use of the existing NTFS configuration, security is enforced.