Network Monitoring For Satellite Teleconference, Distance Learning, and Media Distribution

Today’s Satellite Systems

Many of today’s satellite system projects involve uplinks feeding content to hundreds or even thousands of receivers. Traditionally, when these networks are monitored at all, planners rely on SNMP traps for major faults and on round-robin polling and pinging to determine the health of the receivers and other devices.

With traditional monitoring systems, polling speed means that NOC (network operations center) personnel learn the status of every device only within an hour or two. For example (a minimal round-robin polling sketch follows the list):

  • Is the device alive?
  • Can it be pinged?
  • Is the receiver on the right channel?
  • What is the signal level?
  • Is it locked to the transmission?
  • What are the error rates?
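
The round-robin approach described above can be as simple as walking the receiver list one device at a time. The sketch below is illustrative only: it assumes a hypothetical list of receiver hostnames, uses a plain ICMP ping as the health probe, and relies on Linux-style ping flags.

```python
import subprocess
import time

# Hypothetical receiver list; a real network might have thousands of entries.
RECEIVERS = ["rx-chicago-01", "rx-milwaukee-02", "rx-denver-03"]

def is_alive(host: str, timeout_s: int = 2) -> bool:
    """Ping the device once and report whether it answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def round_robin_poll() -> dict:
    """Sequentially poll every receiver and record how long each poll took."""
    status = {}
    for host in RECEIVERS:
        start = time.monotonic()
        status[host] = {
            "alive": is_alive(host),
            "poll_seconds": round(time.monotonic() - start, 2),
        }
    return status

if __name__ == "__main__":
    for host, info in round_robin_poll().items():
        print(host, info)
```

Sequential loops like this are exactly why a full status sweep of a large fleet takes an hour or two.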

As for traps, a little secret: many devices do not support them, and even when they do, traps can be lost because they are sent as UDP traffic with no assured delivery. The requirements of today’s professional satellite systems have evolved to the point where NOC personnel need more information and they need it more quickly. Teleconference, distance learning, and media distribution systems are representative examples.

For these new systems, monitoring has moved from a strictly maintenance need to both a maintenance and an operational requirement.

Teleconference and distance learning have an almost interactive need for status and data (a per-site readiness check is sketched after this list):

1. Is the receiver on the right channel?

2. Is the feed good? Are error rates low and the signal level high?

3. If the signal is low in Milwaukee, what’s the weather like?

4. Are all of the correct materials downloaded to the receiver?

5. Are keypads and other data entry devices ready?
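
A monitoring system can fold these questions into a single readiness check per site. The sketch below assumes a hypothetical per-receiver status record; field names such as signal_dbm and error_rate, and the thresholds used, are illustrative rather than part of any real receiver API.

```python
from dataclasses import dataclass

@dataclass
class ReceiverStatus:
    # Illustrative fields; a real receiver would expose these via SNMP or a web API.
    site: str
    channel: int
    locked: bool
    signal_dbm: float
    error_rate: float
    materials_downloaded: bool
    keypads_ready: bool

def readiness_issues(s: ReceiverStatus, expected_channel: int) -> list[str]:
    """Return a list of human-readable problems; an empty list means the site is ready."""
    issues = []
    if s.channel != expected_channel:
        issues.append(f"wrong channel ({s.channel}, expected {expected_channel})")
    if not s.locked:
        issues.append("not locked to the transmission")
    if s.signal_dbm < -70.0:          # illustrative threshold
        issues.append(f"low signal ({s.signal_dbm} dBm)")
    if s.error_rate > 1e-6:           # illustrative threshold
        issues.append(f"high error rate ({s.error_rate})")
    if not s.materials_downloaded:
        issues.append("class materials missing")
    if not s.keypads_ready:
        issues.append("keypads not ready")
    return issues

milwaukee = ReceiverStatus("Milwaukee", 7, True, -72.5, 2e-7, True, True)
print(readiness_issues(milwaukee, expected_channel=7))  # ['low signal (-72.5 dBm)']
```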

Media distribution systems for broadcast and digital cinema have many of the same needs and some others:

1. Is there sufficient space on the device to receive the huge files used in these operations?

2. What is the progress of the transfer (it takes a long time to transfer multi-gigabyte files)?

3. Did the digital rights management (DRM) keys arrive?

4. If it is a playout device, did the correct play list arrive?

5. Can we get the playback logs as events play?

6. What about maintenance logs? Do we have to SSH into each device and retrieve them manually, or will the system automatically gather and check them for us? (A sketch of automated log collection follows this list.)
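
Question 6 does not have to remain a manual chore. The following sketch assumes non-interactive, key-based SSH access and a hypothetical log path on each playout device; it shows one way a monitoring system might collect maintenance logs automatically and surface the lines the NOC should look at.

```python
import subprocess

# Hypothetical device list and log path; both are assumptions for illustration.
DEVICES = ["playout-nyc-01", "playout-lax-02"]
LOG_PATH = "/var/log/playout/maintenance.log"

def fetch_log(host: str) -> str:
    """Retrieve the maintenance log over SSH (assumes key-based login)."""
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", host, f"cat {LOG_PATH}"],
        capture_output=True,
        text=True,
        timeout=30,
    )
    if result.returncode != 0:
        raise RuntimeError(f"{host}: log retrieval failed: {result.stderr.strip()}")
    return result.stdout

def worrying_lines(log_text: str) -> list[str]:
    """Very simple check: surface ERROR and WARNING lines for the NOC."""
    return [line for line in log_text.splitlines()
            if "ERROR" in line or "WARNING" in line]

if __name__ == "__main__":
    for host in DEVICES:
        try:
            print(host, worrying_lines(fetch_log(host))[:5])
        except RuntimeError as exc:
            print(exc)
```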

These lists are representative of the information the NOC needs to ensure proper operation of the network. First- and second-generation monitoring systems do not even begin to address gathering and reporting the new types of information needed to operate these systems reliably.

Third Generation Network Monitoring

Satellite network monitoring systems must use parallel collection processes to keep data fresh enough to be of value to the NOC; a minimal concurrent-polling sketch follows this paragraph. For many types of operations, media content must be tracked as well. You could argue that media is not part of network monitoring, yet today’s NOC needs this information to ensure proper operation. A new generation of devices is out there delivering media content, and real-time or near-real-time reporting is needed to ensure these systems operate properly. New ways to visualize the network, suited to these new data sets, are also required.
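
As a concrete illustration of parallel collection, the sketch below fans a cheap per-device probe out across a pool of worker threads. The fleet size, hostnames, and the choice of ICMP ping as the probe are assumptions; the point is that a sweep taking hours serially shrinks to roughly (fleet size / workers) times the per-poll time.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical fleet; real deployments may have thousands of receivers.
RECEIVERS = [f"rx-{n:04d}" for n in range(1, 501)]

def poll_device(host: str) -> tuple[str, bool]:
    """One cheap health probe per device (ICMP ping, Linux-style flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return host, result.returncode == 0

def parallel_sweep(max_workers: int = 50) -> dict[str, bool]:
    """Poll the whole fleet concurrently instead of one device at a time."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(poll_device, RECEIVERS))

if __name__ == "__main__":
    down = [host for host, ok in parallel_sweep().items() if not ok]
    print(f"{len(down)} of {len(RECEIVERS)} receivers unreachable")
```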

It’s a new game in the network monitoring world: monitoring software has to move to the next level, including becoming media-aware, to meet the information needs of today’s NOC.

MCSE Distributed File System

A distributed file system (DFS) is a file system that facilitates sharing of files and resources across a network by presenting them as consistent, shared storage. The earliest file servers were designed in the 1970s. Following its introduction in 1985, Sun’s Network File System (NFS) eventually became the most commonly used distributed file system. Aside from NFS, other notable distributed file systems include the Common Internet File System (CIFS) and the Andrew File System (AFS).

The Microsoft Distributed File System (also called DFS) is a client/server solution that enables a large organization to manage many distributed shared folders within a single file namespace. It delivers location transparency and redundancy, improving data availability during a failure or under heavy load, by allowing shares in a number of different locations to be logically arranged under a single folder known as the DFS root.

It is a client/server-based service that lets users access and work with files located on the hosting server as if they were on their own computers. When a user accesses a file on the server, the server transmits a copy of the file, which is cached on the user’s computer while the data is being processed and is then returned to the server.

Whenever a user accesses a share beneath the DFS root, the request actually goes through a DFS link, which allows the DFS server to redirect it automatically to the appropriate share and file server.
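
From a client program’s point of view, that redirection is invisible: the code simply opens a path under the DFS namespace and lets DFS decide which file server answers. The domain, root, and file names below are hypothetical, chosen only to illustrate location transparency.

```python
from pathlib import Path

# Hypothetical domain-based DFS path: \\<domain>\<root>\<link>\<file>.
# DFS resolves the link to whichever file server currently hosts the share.
dfs_file = Path(r"\\corp.example.com\Public\Policies\handbook.txt")

if dfs_file.exists():
    # The client neither knows nor cares which server actually served the bytes.
    text = dfs_file.read_text(encoding="utf-8")
    print(f"Read {len(text)} characters via the DFS namespace")
else:
    print("Path not reachable; check the DFS link target or network access")
```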

There are two ways to deploy DFS on a Windows server:

A standalone DFS root exists only on the local computer and therefore does not use Active Directory. A standalone DFS can only be accessed from the computer on which it was created; it offers no fault tolerance and cannot be linked to any other DFS.

Domain-based DFS roots are stored in Active Directory, which allows their configuration to be distributed to any number of domain controllers in the domain; this gives DFS fault tolerance. A domain-based DFS root needs to be hosted on a domain controller so that links with identical targets receive all of their replicated data across the network. The file and root data are replicated by the Microsoft File Replication Service (FRS).

Advantages of DFS

1. Easy access: users do not need to know the various locations their data comes from. By remembering a single location, they can reach all of the data.

2. Fault tolerance: the master DFS server can have a duplicate (target) on another DFS server. If the master DFS server becomes unavailable, users can continue accessing the data from the backup target, with no interruption in access to information.

3. Load balancing: when all of the DFS root servers and targets are healthy, requests can be balanced across them, typically by directing different users to different locations.

4. Security: security is enforced through NTFS permissions.

Various Linux Hosting Distributions

Linux was first introduced in 1991 and has been a well-received operating system ever since. Nowadays, users see Linux hosting as an excellent alternative to Windows or Mac OS X. Initially it was used only as a server solution, but it is now making its way into homes as well. One of the biggest reasons for its popularity is that it is open source, so many developers are constantly contributing new features. Linux is packaged in several distributions, and that is what we will look at in this article.

The first distribution is Ubuntu. It is considered the most widely used distribution because it is geared more toward the desktop than the server, and many of its features can match those of Windows. That makes Ubuntu a strong contender when choosing a hosting solution.

Next, we have the Kubuntu distribution. From its name alone, you can tell it is related to Ubuntu. In fact, it is very similar to Ubuntu except that it ships a different desktop environment (KDE). Both distributions offer similar functions and are very user-friendly compared with other Linux distributions, so many users choose them to avoid dealing with complicated server management tasks.

Then there is the Debian distribution. Debian is a more complex system that may be a little harder for users without the necessary background, a consequence of the flexibility and performance that make it well suited to either a desktop or a server environment.

Following that, we have Fedora. Unlike the others, this distribution is considered lightweight and is often found on dedicated hosting servers. Based on Red Hat Linux, Fedora is commonly used commercially and therefore competes with Microsoft Windows. Fedora does not demand large amounts of resources; it can run on limited hardware while still delivering strong performance.

Last but not least, there is the CentOS distribution. CentOS, which stands for Community Enterprise Operating System, is also based on Red Hat Linux, but it is a free distribution. CentOS is used for commercial deployments because of its stability and security.