Manipulation and communication of wireless interface parameters were achieved by modifying open-source Linux device drivers for Atheros and Intel 802.11a/b/g cards.

== Introduction ==

Network simulation tools such as ns-2 and OPNET have contributed substantially to the development of networking protocols over a number of years, in both academia and industry. The validity of their results is such that they remain the preferred way to obtain a first approximation to actual results. Numerous publications and doctoral dissertations rest principally on simulation results obtained with these tools. However, these tools were developed mainly for wired network scenarios, where the physical layer is far more predictable and less complicated than in wireless networks. Using them for wireless networks demands accurate physical-layer models and their subsequent integration. Although accurate mathematical models exist, they are very difficult to implement in a simulator given the inherently random nature of every aspect of the wireless physical layer. Implementation is further complicated by node mobility and the random manner in which it occurs.

The OPNET simulator implements the wireless physical layer with appreciable support for some aspects but is quite simplistic in others. Additionally, being a commercial product, its complete source code is not available for modification or incremental addition, which is a fundamental requirement in this kind of research. For ns-2, the source code is freely available, but the physical-layer models are mostly deterministic, which in turn produces incomplete and biased results.

Emulation is seen as an alternative technique for studying these networks: the physical-layer characteristics remain the same as in real-world wireless networks, and only mobility is emulated. This is the approach adopted by the proposed ORBIT (Open Access Research Testbed for Next-Generation Wireless Networks) testbed. As the name suggests, it will be an open-access multi-user experimental (MXF) facility to support research on next-generation wireless networks.

As part of this effort, open-source software tools need to be developed to perform the fundamental functions of (a) setting up and conducting an experiment and (b) collecting and storing the results. Conducting an experiment requires basic functionality such as (a) transmission and reception of frames, (b) the ability to tune interface parameters and (c) the ability to control MAC-level mechanisms. Libmac, a user-level C library, attempts to provide these features.

== Design ==

=== Motivation ===

Currently, protocol development is a two-stage process: specific ideas for modifying an existing protocol, or developing a new one, are first implemented in a simulator and then, based on the results, implemented in an open-source kernel. This two-stage process is justified when the first stage provides results with an acceptable loss of accuracy, in a time frame smaller than would be needed if the first stage were itself a kernel implementation. Simulator designers must make this tradeoff so that implementors obtain a first approximation to the actual results as quickly as possible, with which they can decide whether to invest their time and effort in the second stage. In addition, protocol implementations in a simulator have the advantages of development ease and flexibility.

However, as discussed earlier, the loss of accuracy for wireless network scenarios in currently available simulators is not acceptable. Therefore, we have to consider:
* either a single-stage protocol development process, where we modify or implement the protocol directly in the kernel, or
* replacing the first stage with a process that is more accurate than the existing one.

The advantages of the first approach are that kernel-level implementations of a protocol:
* provide complete and accurate information about its performance,
* are efficient and
* take into account the interworking between the protocol and all the other system components.

The disadvantages are:
* the requirement of a high level of programming expertise,
* the lack of debugging tools,
* the presence of protection mechanisms,
* the requirement of low-level languages, like C and assembly,
* the high risk involved ('buggy' kernel code can bring down the entire system) and
* the strict layering of wired network protocols and of their implementations, which implies that newer philosophies of cross-layer design will prove difficult, if not impossible, to implement in existing stable versions.

Thus, kernel-level implementations provide a realistic basis for performance analysis, but they are time-consuming and difficult. Additionally, the existing two-stage philosophy will be hard to replace with this single-stage approach because of the former's widespread acceptance and use.

In keeping with the spirit of the two-stage philosophy, emulation is seen as an acceptable replacement for the first stage. It can be designed to provide the same advantages as simulators, namely,
* development ease and flexibility and
* speed of execution.

In addition, accuracy levels will be much closer to those of real-world experiments.