Simulation Manual
OMNeT++ version 6.0.3
1 Introduction
2 Overview
3 The NED Language
4 Simple Modules
5 Messages and Packets
6 Message Definitions
7 The Simulation Library
8 Graphics and Visualization
9 Building Simulation Programs
10 Configuring Simulations
11 Running Simulations
12 Result Recording and Analysis
13 Eventlog
14 Documenting NED and Messages
15 Testing
16 Parallel Distributed Simulation
17 Customizing and Extending OMNeT++
18 Embedding the Simulation Kernel
19 Appendix A: NED Reference
20 Appendix B: NED Language Grammar
21 Appendix C: NED XML Binding
22 Appendix D: NED Functions
23 Appendix E: Message Definitions Grammar
24 Appendix F: Message Class/Field Properties
25 Appendix G: Display String Tags
26 Appendix H: Figure Definitions
27 Appendix I: Configuration Options
28 Appendix J: Result File Formats
29 Appendix K: Eventlog File Format
30 Appendix L: Python API for Chart Scripts
1 Introduction
1.1 What Is OMNeT++?
1.2 Organization of This Manual
2 Overview
2.1 Modeling Concepts
2.1.1 Hierarchical Modules
2.1.2 Module Types
2.1.3 Messages, Gates, Links
2.1.4 Modeling of Packet Transmissions
2.1.5 Parameters
2.1.6 Topology Description Method
2.2 Programming the Algorithms
2.3 Using OMNeT++
2.3.1 Building and Running Simulations
2.3.2 What Is in the Distribution
3 The NED Language
3.1 NED Overview
3.2 NED Quickstart
3.2.1 The Network
3.2.2 Introducing a Channel
3.2.3 The App, Routing, and Queue Simple Modules
3.2.4 The Node Compound Module
3.2.5 Putting It Together
3.3 Simple Modules
3.4 Compound Modules
3.5 Channels
3.6 Parameters
3.6.1 Assigning a Value
3.6.2 Expressions
3.6.3 Parameter References
3.6.4 Volatile Parameters
3.6.5 Mutable Parameters
3.6.6 Units
3.6.7 XML Parameters
3.6.8 Object Parameters and Structured Data
3.6.9 Passing a Formula as Parameter
3.7 Gates
3.8 Submodules
3.9 Connections
3.9.1 Channel Specification
3.9.2 Channel Names
3.10 Multiple Connections
3.10.1 Examples
3.10.2 Connection Patterns
3.11 Parametric Submodule and Connection Types
3.11.1 Parametric Submodule Types
3.11.2 Conditional Parametric Submodules
3.11.3 Parametric Connection Types
3.12 Metadata Annotations (Properties)
3.12.1 Property Indices
3.12.2 Data Model
3.12.3 Overriding and Extending Property Values
3.13 Inheritance
3.14 Packages
3.14.1 Overview
3.14.2 Name Resolution, Imports
3.14.3 Name Resolution With "like"
3.14.4 The Default Package
4 Simple Modules
4.1 Simulation Concepts
4.1.1 Discrete Event Simulation
4.1.2 The Event Loop
4.1.3 Events and Event Execution Order in OMNeT++
4.1.4 Simulation Time
4.1.5 FES Implementation
4.2 Components, Simple Modules, Channels
4.3 Defining Simple Module Types
4.3.1 Overview
4.3.2 Constructor
4.3.3 Initialization and Finalization
4.4 Adding Functionality to cSimpleModule
4.4.1 handleMessage()
4.4.2 activity()
4.4.3 Use Modules Instead of Global Variables
4.4.4 Reusing Module Code via Subclassing
4.5 Accessing Module Parameters
4.5.1 Volatile and Non-Volatile Parameters
4.5.2 Changing a Parameter's Value
4.5.3 Further cPar Methods
4.5.4 Object Parameters
4.5.5 handleParameterChange()
4.6 Accessing Gates and Connections
4.6.1 Gate Objects
4.6.2 Connections
4.6.3 The Connection's Channel
4.7 Sending and Receiving Messages
4.7.1 Self-Messages
4.7.2 Sending Messages
4.7.3 Broadcasts and Retransmissions
4.7.4 Delayed Sending
4.7.5 Direct Message Sending
4.7.6 Packet Transmissions
4.7.7 Receiving Messages with activity()
4.8 Channels
4.8.1 Overview
4.8.2 The Channel API
4.8.3 Channel Examples
4.9 Stopping the Simulation
4.9.1 Normal Termination
4.9.2 Raising Errors
4.10 Finite State Machines
4.10.1 Overview
4.11 Navigating the Module Hierarchy
4.11.1 Module Vectors
4.11.2 Component IDs
4.11.3 Walking Up and Down the Module Hierarchy
4.11.4 Finding Modules by Path
4.11.5 Iterating over Submodules
4.11.6 Navigating Connections
4.12 Direct Method Calls Between Modules
4.13 Dynamic Module Creation
4.13.1 When To Use
4.13.2 Overview
4.13.3 Creating Modules
4.13.4 Deleting Modules
4.13.5 The preDelete() method
4.13.6 Component Weak Pointers
4.13.7 Module Deletion and finish()
4.13.8 Creating Connections
4.13.9 Removing Connections
4.14 Signals
4.14.1 Design Considerations and Rationale
4.14.2 The Signals Mechanism
4.14.3 Listening to Model Changes
4.15 Signal-Based Statistics Recording
4.15.1 Motivation
4.15.2 Declaring Statistics
4.15.3 Statistics Recording for Dynamically Registered Signals
4.15.4 Adding Result Filters and Recorders Programmatically
4.15.5 Emitting Signals
4.15.6 Writing Result Filters and Recorders
5 Messages and Packets
5.1 Overview
5.2 The cMessage Class
5.2.1 Basic Usage
5.2.2 Duplicating Messages
5.2.3 Message IDs
5.2.4 Control Info
5.2.5 Information About the Last Arrival
5.2.6 Display String
5.3 Self-Messages
5.3.1 Using a Message as Self-Message
5.3.2 Context Pointer
5.4 The cPacket Class
5.4.1 Basic Usage
5.4.2 Identifying the Protocol
5.4.3 Information About the Last Transmission
5.4.4 Encapsulating Packets
5.4.5 Reference Counting
5.4.6 Encapsulating Several Packets
5.5 Attaching Objects To a Message
5.5.1 Attaching Objects
5.5.2 Attaching Parameters
6 Message Definitions
6.1 Introduction
6.1.1 The First Message Class
6.1.2 Ingredients of Message Files
6.2 Classes, Messages, Packets, Structs
6.2.1 Classes, Messages, Packets
6.2.2 Structs
6.3 Enums
6.4 Imports
6.5 Namespaces
6.6 Properties
6.6.1 Data Types
6.7 Fields
6.7.1 Scalar fields
6.7.2 Initial Values
6.7.3 Overriding Initial Values from Subclasses
6.7.4 Const Fields
6.7.5 Abstract Fields
6.7.6 Fixed-Size Arrays
6.7.7 Variable-Size Arrays
6.7.8 Classes and Structs as Fields
6.7.9 Non-Owning Pointer Fields
6.7.10 Owning Pointer Fields
6.8 Literal C++ Blocks
6.9 Using External C++ Types
6.10 Customizing the Generated Class
6.10.1 Customizing Method Names
6.10.2 Injecting Code into Methods
6.10.3 Generating str()
6.10.4 Custom-implementation Methods
6.10.5 Custom Fields
6.10.6 Customizing the Class via Inheritance
6.10.7 Using an Abstract Field
6.11 Descriptor Classes
6.11.1 cClassDescriptor
6.11.2 Controlling Descriptor Generation
6.11.3 Generating Descriptors For Existing Classes
6.11.4 Field Metadata
6.11.5 Method Name Properties
6.11.6 toString/fromString
6.11.7 toValue/fromValue
6.11.8 Field Modifiers
7 The Simulation Library
7.1 Fundamentals
7.1.1 Using the Library
7.1.2 The cObject Base Class
7.1.3 Iterators
7.1.4 Runtime Errors
7.2 Logging from Modules
7.2.1 Log Output
7.2.2 Log Levels
7.2.3 Log Statements
7.2.4 Log Categories
7.2.5 Composition and New lines
7.2.6 Implementation
7.3 Random Number Generators
7.3.1 RNG Implementations
7.3.2 Global and Component-Local RNGs
7.3.3 Accessing the RNGs
7.4 Generating Random Variates
7.4.1 Component Methods
7.4.2 Random Number Stream Classes
7.4.3 Generator Functions
7.4.4 Random Numbers from Histograms
7.4.5 Adding New Distributions
7.5 Container Classes
7.5.1 Queue class: cQueue
7.5.2 Expandable Array: cArray
7.6 Routing Support: cTopology
7.6.1 Overview
7.6.2 Basic Usage
7.6.3 Shortest Paths
7.6.4 Manipulating the graph
7.7 Pattern Matching
7.7.1 cPatternMatcher
7.7.2 cMatchExpression
7.8 Dynamic Expression Evaluation
7.9 Collecting Summary Statistics and Histograms
7.9.1 cStdDev
7.9.2 cHistogram
7.9.3 cPSquare
7.9.4 cKSplit
7.10 Recording Simulation Results
7.10.1 Output Vectors: cOutVector
7.10.2 Output Scalars
7.11 Watches and Snapshots
7.11.1 Basic Watches
7.11.2 Read-write Watches
7.11.3 Structured Watches
7.11.4 STL Watches
7.11.5 Snapshots
7.11.6 Getting Coroutine Stack Usage
7.12 Defining New NED Functions
7.12.1 Define_NED_Function()
7.12.2 Define_NED_Math_Function()
7.13 Deriving New Classes
7.13.1 cObject or Not?
7.13.2 cObject Virtual Methods
7.13.3 Class Registration
7.13.4 Details
7.14 Object Ownership Management
7.14.1 The Ownership Tree
7.14.2 Managing Ownership
8 Graphics and Visualization
8.1 Overview
8.2 Placement of Visualization Code
8.2.1 The refreshDisplay() Method
8.2.2 Advantages
8.2.3 Why is refreshDisplay() const?
8.3 Smooth Animation
8.3.1 Concepts
8.3.2 Smooth vs. Traditional Animation
8.3.3 The Choice of Animation Speed
8.3.4 Holds
8.3.5 Disabling Built-In Animations
8.4 Display Strings
8.4.1 Syntax and Placement
8.4.2 Inheritance
8.4.3 Submodule Tags
8.4.4 Background Tags
8.4.5 Connection Display Strings
8.4.6 Message Display Strings
8.4.7 Parameter Substitution
8.4.8 Colors
8.4.9 Icons
8.4.10 Layouting
8.4.11 Changing Display Strings at Runtime
8.5 Bubbles
8.6 The Canvas
8.6.1 Overview
8.6.2 Creating, Accessing and Viewing Canvases
8.6.3 Figure Classes
8.6.4 The Figure Tree
8.6.5 Creating and Manipulating Figures from NED and C++
8.6.6 Stacking Order
8.6.7 Transforms
8.6.8 Showing/Hiding Figures
8.6.9 Figure Tooltip, Associated Object
8.6.10 Specifying Positions, Colors, Fonts and Other Properties
8.6.11 Primitive Figures
8.6.12 Compound Figures
8.6.13 Self-Refreshing Figures
8.6.14 Figures with Custom Renderers
8.7 3D Visualization
8.7.1 Introduction
8.7.2 The OMNeT++ API for OpenSceneGraph
8.7.3 Using OSG
8.7.4 Using osgEarth
8.7.5 OpenSceneGraph/osgEarth Programming Resources
9 Building Simulation Programs
9.1 Overview
9.2 Using opp_makemake and Makefiles
9.2.1 Command-line Options
9.2.2 Basic Use
9.2.3 Debug and Release Builds
9.2.4 Debugging the Makefile
9.2.5 Using External C/C++ Libraries
9.2.6 Building Directory Trees
9.2.7 Dependency Handling
9.2.8 Out-of-Directory Build
9.2.9 Building Shared and Static Libraries
9.2.10 Recursive Builds
9.2.11 Customizing the Makefile
9.2.12 Projects with Multiple Source Trees
9.2.13 A Multi-Directory Example
9.3 Project Features
9.3.1 What is a Project Feature
9.3.2 The opp_featuretool Program
9.3.3 The .oppfeatures File
9.3.4 How to Introduce a Project Feature
10 Configuring Simulations
10.1 The Configuration File
10.1.1 An Example
10.1.2 File Syntax
10.1.3 File Inclusion
10.2 Sections
10.2.1 The [General] Section
10.2.2 Named Configurations
10.2.3 Section Inheritance
10.3 Assigning Module Parameters
10.3.1 Using Wildcard Patterns
10.3.2 Using the Default Values
10.4 Parameter Studies
10.4.1 Iterations
10.4.2 Named Iteration Variables
10.4.3 Parallel Iteration
10.4.4 Predefined Variables, Run ID
10.4.5 Constraint Expression
10.4.6 Repeating Runs with Different Seeds
10.4.7 Experiment-Measurement-Replication
10.5 Configuring the Random Number Generators
10.5.1 Number of RNGs
10.5.2 RNG Choice
10.5.3 RNG Mapping
10.5.4 Automatic Seed Selection
10.5.5 Manual Seed Configuration
10.6 Logging
10.6.1 Compile-Time Filtering
10.6.2 Runtime Filtering
10.6.3 Log Prefix Format
10.6.4 Configuring Logging in Cmdenv
10.6.5 Configuring Logging in Qtenv
11 Running Simulations
11.1 Introduction
11.2 Simulation Executables vs Libraries
11.3 Command-Line Options
11.4 Configuration Options on the Command Line
11.5 Specifying Ini Files
11.6 Specifying the NED Path
11.7 Selecting a User Interface
11.8 Selecting Configurations and Runs
11.8.1 Run Filter Syntax
11.8.2 The Query Option
11.9 Loading Extra Libraries
11.10 Stopping Condition
11.11 Controlling the Output
11.12 Debugging
11.13 Debugging Leaked Messages
11.14 Debugging Other Memory Problems
11.15 Profiling
11.16 Checkpointing
11.17 Using Cmdenv
11.17.1 Sample Output
11.17.2 Selecting Runs, Batch Operation
11.17.3 Express Mode
11.17.4 Other Options
11.18 The Qtenv Graphical User Interface
11.18.1 Command-Line and Configuration Options
11.19 Running Simulation Campaigns
11.19.1 The Naive Approach
11.19.2 Using opp_runall
11.19.3 Exploiting Clusters
11.20 Akaroa Support: Multiple Replications in Parallel
11.20.1 Introduction
11.20.2 What Is Akaroa
11.20.3 Using Akaroa with OMNeT++
12 Result Recording and Analysis
12.1 Result Recording
12.1.1 Using Signals and Declared Statistics
12.1.2 Direct Result Recording
12.2 Configuring Result Collection
12.2.1 Result File Names
12.2.2 Enabling/Disabling Result Items
12.2.3 Selecting Recording Modes for Signal-Based Statistics
12.2.4 Warm-up Period
12.2.5 Output Vectors Recording Intervals
12.2.6 Recording Event Numbers in Output Vectors
12.2.7 Saving Parameters as Scalars
12.2.8 Recording Precision
12.3 Result Files
12.3.1 The OMNeT++ Result File Format
12.3.2 SQLite Result Files
12.3.3 Scavetool
12.4 Result Analysis
12.4.1 Python Packages
12.4.2 An Example Chart Script
12.5 Alternatives
13 Eventlog
13.1 Introduction
13.2 Configuration
13.2.1 File Name
13.2.2 Recording Intervals
13.2.3 Recording Modules
13.2.4 Recording Message Data
13.3 Eventlog Tool
13.3.1 Filter
13.3.2 Echo
14 Documenting NED and Messages
14.1 Overview
14.2 Documentation Comments
14.2.1 Private Comments
14.2.2 More on Comment Placement
14.3 Referring to Other NED and Message Types
14.3.1 Automatic Linking
14.3.2 Tilde Linking
14.4 Text Layout and Formatting
14.4.1 Paragraphs and Lists
14.4.2 Special Tags
14.4.3 Text Formatting Using HTML
14.4.4 Escaping HTML Tags
14.5 Incorporating Extra Content
14.5.1 Adding a Custom Title Page
14.5.2 Adding Extra Pages
14.5.3 Incorporating Externally Created Pages
14.5.4 File Inclusion
14.5.5 Extending Type Pages with Extra Content
15 Testing
15.1 Overview
15.1.1 Verification, Validation
15.1.2 Unit Testing, Regression Testing
15.2 The opp_test Tool
15.2.1 Introduction
15.2.2 Terminology
15.2.3 Test File Syntax
15.2.4 Test Description
15.2.5 Test Code Generation
15.2.6 PASS Criteria
15.2.7 Extra Processing Steps
15.2.8 Error
15.2.9 Expected Failure
15.2.10 Skipped
15.2.11 opp_test Synopsis
15.2.12 Writing the Control Script
15.3 Smoke Tests
15.4 Fingerprint Tests
15.4.1 Fingerprint Computation
15.4.2 Fingerprint Tests
15.5 Unit Tests
15.6 Module Tests
15.7 Statistical Tests
15.7.1 Validation Tests
15.7.2 Statistical Regression Tests
15.7.3 Implementation
16 Parallel Distributed Simulation
16.1 Introduction to Parallel Discrete Event Simulation
16.2 Assessing Available Parallelism in a Simulation Model
16.3 Parallel Distributed Simulation Support in OMNeT++
16.3.1 Overview
16.3.2 Parallel Simulation Example
16.3.3 Placeholder Modules, Proxy Gates
16.3.4 Configuration
16.3.5 Design of PDES Support in OMNeT++
17 Customizing and Extending OMNeT++
17.1 Overview
17.2 Adding a New Configuration Option
17.2.1 Registration
17.2.2 Reading the Value
17.3 Simulation Lifetime Listeners
17.4 cEvent
17.5 Defining a New Random Number Generator
17.6 Defining a New Event Scheduler
17.7 Defining a New FES Data Structure
17.8 Defining a New Fingerprint Algorithm
17.9 Defining a New Output Scalar Manager
17.10 Defining a New Output Vector Manager
17.11 Defining a New Eventlog Manager
17.12 Defining a New Snapshot Manager
17.13 Defining a New Configuration Provider
17.13.1 Overview
17.13.2 The Startup Sequence
17.13.3 Providing a Custom Configuration Class
17.13.4 Providing a Custom Reader for SectionBasedConfiguration
17.14 Implementing a New User Interface
18 Embedding the Simulation Kernel
18.1 Architecture
18.2 Embedding the OMNeT++ Simulation Kernel
18.2.1 The main() Function
18.2.2 The simulate() Function
18.2.3 Providing an Environment Object
18.2.4 Providing a Configuration Object
18.2.5 Loading NED Files
18.2.6 How to Eliminate NED Files
18.2.7 Assigning Module Parameters
18.2.8 Extracting Statistics from the Model
18.2.9 The Simulation Loop
18.2.10 Multiple, Coexisting Simulations
18.2.11 Installing a Custom Scheduler
18.2.12 Multi-Threaded Programs
19 Appendix A: NED Reference
19.1 Syntax
19.1.1 NED File Name Extension
19.1.2 NED File Encoding
19.1.3 Reserved Words
19.1.4 Identifiers
19.1.5 Case Sensitivity
19.1.6 Literals
19.1.7 Comments
19.1.8 Grammar
19.2 Built-in Definitions
19.3 Packages
19.3.1 Package Declaration
19.3.2 Directory Structure, package.ned
19.4 Components
19.4.1 Simple Modules
19.4.2 Compound Modules
19.4.3 Networks
19.4.4 Channels
19.4.5 Module Interfaces
19.4.6 Channel Interfaces
19.4.7 Resolving the C++ Implementation Class
19.4.8 Properties
19.4.9 Parameters
19.4.10 Pattern Assignments
19.4.11 Gates
19.4.12 Submodules
19.4.13 Connections
19.4.14 Conditional and Loop Connections, Connection Groups
19.4.15 Inner Types
19.4.16 Name Uniqueness
19.4.17 Parameter Assignment Order
19.4.18 Type Name Resolution
19.4.19 Resolution of Parametric Types
19.4.20 Implementing an Interface
19.4.21 Inheritance
19.4.22 Network Build Order
19.5 Expressions
19.5.1 Constants
19.5.2 Array and Object Values
19.5.3 Operators
19.5.4 Referencing Parameters and Loop Variables
19.5.5 The typename Operator
19.5.6 The index Operator
19.5.7 The exists() Operator
19.5.8 The sizeof() Operator
19.5.9 The expr() Operator
19.5.10 Functions
19.5.11 Units of Measurement
20 Appendix B: NED Language Grammar
21 Appendix C: NED XML Binding
22 Appendix D: NED Functions
22.1 Category "conversion":
22.2 Category "i/o":
22.3 Category "math":
22.4 Category "misc":
22.5 Category "ned":
22.6 Category "random/continuous":
22.7 Category "random/discrete":
22.8 Category "strings":
22.9 Category "units":
22.10 Category "xml":
22.11 Category "units/conversion":
23 Appendix E: Message Definitions Grammar
24 Appendix F: Message Class/Field Properties
25 Appendix G: Display String Tags
25.1 Module and Connection Display String Tags
25.2 Message Display String Tags
26 Appendix H: Figure Definitions
26.1 Built-in Figure Types
26.2 Attribute Types
26.3 Figure Attributes
27 Appendix I: Configuration Options
27.1 Configuration Options
27.2 Predefined Variables
28 Appendix J: Result File Formats
28.1 Native Result Files
28.1.1 Version
28.1.2 Run Declaration
28.1.3 Attributes
28.1.4 Iteration Variables
28.1.5 Configuration Entries
28.1.6 Scalar Data
28.1.7 Vector Declaration
28.1.8 Vector Data
28.1.9 Index Header
28.1.10 Index Data
28.1.11 Statistics Object
28.1.12 Field
28.1.13 Histogram Bin
28.2 SQLite Result Files
29 Appendix K: Eventlog File Format
29.1 Supported Entry Types and Their Attributes
30 Appendix L: Python API for Chart Scripts
30.1 Modules
30.1.1 Module omnetpp.scave.results
30.1.2 Module omnetpp.scave.chart
30.1.3 Module omnetpp.scave.ideplot
30.1.4 Module omnetpp.scave.utils
30.1.5 Module omnetpp.scave.vectorops
30.1.6 Module omnetpp.scave.analysis
30.1.7 Module omnetpp.scave.charttemplate
OMNeT++ is an object-oriented modular discrete event network simulation framework. It has a generic architecture, so it can be (and has been) used in various problem domains: the modeling of wired and wireless communication networks, protocol modeling, the modeling of queueing networks, the modeling of multiprocessors and other distributed hardware systems, validating hardware architectures, evaluating performance aspects of complex software systems, and, in general, the modeling and simulation of any system where the discrete event approach is suitable.
OMNeT++ itself is not a simulator of anything concrete, but rather provides infrastructure and tools for writing simulations. One of the fundamental ingredients of this infrastructure is a component architecture for simulation models. Models are assembled from reusable components termed modules. Well-written modules are truly reusable, and can be combined in various ways like LEGO blocks.
Modules can be connected with each other via gates (other systems would call them ports), and combined to form compound modules. The depth of module nesting is not limited. Modules communicate through message passing, where messages may carry arbitrary data structures. Modules can pass messages along predefined paths via gates and connections, or directly to their destination; the latter is useful for wireless simulations, for example. Modules may have parameters that can be used to customize module behavior and/or to parameterize the model's topology. Modules at the lowest level of the module hierarchy are called simple modules, and they encapsulate model behavior. Simple modules are programmed in C++, and make use of the simulation library.
OMNeT++ simulations can be run under various user interfaces. Graphical, animating user interfaces are highly useful for demonstration and debugging purposes, and command-line user interfaces are best for batch execution.
The simulator as well as the user interfaces and tools are highly portable. They are tested on the most common operating systems (Linux, macOS, Windows), and they can be compiled out of the box or after trivial modifications on most Unix-like operating systems.
OMNeT++ also supports parallel distributed simulation. OMNeT++ can use several mechanisms for communication between partitions of a parallel distributed simulation, for example MPI or named pipes. The parallel simulation algorithm can easily be extended, or new ones can be plugged in. Models do not need any special instrumentation to be run in parallel -- it is just a matter of configuration. OMNeT++ can even be used for classroom presentation of parallel simulation algorithms, because simulations can be run in parallel even under the GUI that provides detailed feedback on what is going on.
OMNEST is the commercially supported version of OMNeT++. OMNeT++ is free only for academic and non-profit use; for commercial purposes, one needs to obtain OMNEST licenses from Simulcraft Inc.
The manual is organized as follows:
An OMNeT++ model consists of modules that communicate with message passing. The active modules are termed simple modules; they are written in C++, using the simulation class library. Simple modules can be grouped into compound modules and so forth; the number of hierarchy levels is unlimited. The whole model, called network in OMNeT++, is itself a compound module. Messages can be sent either via connections that span modules or directly to other modules. The concept of simple and compound modules is similar to DEVS atomic and coupled models.
In Fig. below, boxes represent simple modules (gray background) and compound modules. Arrows connecting small boxes represent connections and gates.
Modules communicate with messages that may contain arbitrary data, in addition to usual attributes such as a timestamp. Simple modules typically send messages via gates, but it is also possible to send them directly to their destination modules. Gates are the input and output interfaces of modules: messages are sent through output gates and arrive through input gates. An input gate and an output gate can be linked by a connection. Connections are created within a single level of the module hierarchy; within a compound module, corresponding gates of two submodules, or a gate of one submodule and a gate of the compound module can be connected. Connections spanning hierarchy levels are not permitted, as they would hinder model reuse. Because of the hierarchical structure of the model, messages typically travel through a chain of connections, starting and arriving in simple modules. Compound modules act like "cardboard boxes" in the model, transparently relaying messages between their inner realm and the outside world.

Parameters such as propagation delay, data rate and bit error rate can be assigned to connections. One can also define connection types with specific properties (termed channels) and reuse them in several places.

Modules can have parameters. Parameters are used mainly to pass configuration data to simple modules, and to help define model topology. Parameters can take string, numeric, or boolean values. Because parameters are represented as objects in the program, parameters -- in addition to holding constants -- may transparently act as sources of random numbers, with the actual distributions provided with the model configuration. They may interactively prompt the user for the value, and they might also hold expressions referencing other parameters. Compound modules may pass parameters or expressions of parameters to their submodules.
OMNeT++ provides efficient tools for the user to describe the structure of the actual system. The main features, elaborated in the following subsections, are: hierarchically nested modules; reusable module types; communication with messages via gates and links; modeling of packet transmissions; flexible module parameters; and a topology description method (the NED language).
An OMNeT++ model consists of hierarchically nested modules that communicate by passing messages to each other. OMNeT++ models are often referred to as networks. The top level module is the system module. The system module contains submodules that can also contain submodules themselves (Fig. below). The depth of module nesting is unlimited, allowing the user to reflect the logical structure of the actual system in the model structure.
Model structure is described in OMNeT++'s NED language.
Modules that contain submodules are termed compound modules, as opposed to simple modules at the lowest level of the module hierarchy. Simple modules contain the algorithms of the model. The user implements the simple modules in C++, using the OMNeT++ simulation class library.
Both simple and compound modules are instances of module types. In describing the model, the user defines module types; instances of these module types serve as components for more complex module types. Finally, the user creates the system module as an instance of a previously defined module type; all modules of the network are instantiated as submodules and sub-submodules of the system module.
When a module type is used as a building block, it makes no difference whether it is a simple or compound module. This allows the user to split a simple module into several simple modules embedded into a compound module, or vice versa, to aggregate the functionality of a compound module into a single simple module, without affecting existing users of the module type.
Module types can be stored in files separately from the place of their actual usage. This means that the user can group existing module types and create component libraries. This feature will be discussed later, in chapter [11].
Modules communicate by exchanging messages. In an actual simulation, messages can represent frames or packets in a computer network, jobs or customers in a queuing network or other types of mobile entities. Messages can contain arbitrarily complex data structures. Simple modules can send messages either directly to their destination or along a predefined path, through gates and connections.
The “local simulation time” of a module advances when the module receives a message. The message can arrive from another module or from the same module (self-messages are used to implement timers).
Gates are the input and output interfaces of modules; messages are sent out through output gates and arrive through input gates.
Each connection (also called link) is created within a single level of the module hierarchy: within a compound module, one can connect the corresponding gates of two submodules, or a gate of one submodule and a gate of the compound module (Fig. below).
Because of the hierarchical structure of the model, messages typically travel through a series of connections, starting and arriving in simple modules. Compound modules act like “cardboard boxes” in the model, transparently relaying messages between their inner realm and the outside world.
To facilitate the modeling of communication networks, connections can be used to model physical links. Connections support the following parameters: data rate, propagation delay, bit error rate and packet error rate, and may be disabled. These parameters and the underlying algorithms are encapsulated into channel objects. The user can parameterize the channel types provided by OMNeT++, and also create new ones.
When data rates are in use, a packet object is by default delivered to the target module at the simulation time that corresponds to the end of the packet reception. Since this behavior is not suitable for the modeling of some protocols (e.g. half-duplex Ethernet), OMNeT++ provides the possibility for the target module to specify that it wants the packet object to be delivered to it when the packet reception starts.
Modules can have parameters. Parameters can be assigned in either the NED files or the configuration file omnetpp.ini.
Parameters can be used to customize simple module behavior, and to parameterize the model topology.
Parameters can take string, numeric or boolean values, or can contain XML data trees. Numeric values include expressions using other parameters and calling C functions, random variables from different distributions, and values input interactively by the user.
Numeric-valued parameters can be used to construct topologies in a flexible way. Within a compound module, parameters can define the number of submodules, number of gates, and the way the internal connections are made.
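As a small preview of how this looks in practice, the sketch below uses a parameter to control both the number of submodules and their interconnection. The Chain and Node type names and the port gate are only illustrative, and the NED syntax used here is explained in chapter [3]:

network Chain
{
    parameters:
        int n;                                     // number of nodes; may be set from NED or omnetpp.ini
    submodules:
        node[n]: Node;                             // the parameter determines the number of submodules
    connections:
        for i=0..n-2 {
            node[i].port++ <--> node[i+1].port++;  // ...and the way internal connections are made
        }
}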
The user defines the structure of the model in NED language descriptions (Network Description). The NED language will be discussed in detail in chapter [3].
The simple modules of a model contain algorithms as C++ functions. The full flexibility and power of the programming language can be used, supported by the OMNeT++ simulation class library. The simulation programmer can choose between event-driven and process-style description, and freely use object-oriented concepts (inheritance, polymorphism etc) and design patterns to extend the functionality of the simulator.
Simulation objects (messages, modules, queues etc.) are represented by C++ classes. They have been designed to work together efficiently, creating a powerful simulation programming framework. The following classes are part of the simulation class library: modules, gates, parameters, channels; messages; container classes (e.g. queue, array); data collection classes; and statistic and distribution estimation classes (histograms, the P2 algorithm for calculating quantiles, etc.).
The classes are also specially instrumented, allowing one to traverse objects of a running simulation and display information about them such as name, class name, state variables or contents. This feature makes it possible to create a simulation GUI where all internals of the simulation are visible.
This section provides insights into working with OMNeT++ in practice. Issues such as model files and compiling and running simulations are discussed.
An OMNeT++ model consists of the following parts: NED language topology descriptions (.ned files), which describe the module structure with parameters, gates, etc.; message definitions (.msg files), from which C++ message classes are generated; and simple module sources (C++ files).
The simulation system provides the following components: the simulation kernel, which contains the code that manages the simulation and the simulation class library; and the user interfaces (such as Qtenv and Cmdenv), which are also libraries and are used for interactive or batch execution of simulations.
Simulation programs are built from the above components. First, .msg files are translated into C++ code using the opp_msgc program. Then all C++ sources are compiled and linked with the simulation kernel and a user interface library to form a simulation executable or shared library. NED files are loaded dynamically in their original text form when the simulation program starts.
The simulation may be compiled as a standalone program executable, or as a shared library to be run using OMNeT++'s opp_run utility. When the program is started, it first reads the NED files, then the configuration file usually called omnetpp.ini. The configuration file contains settings that control how the simulation is executed, values for model parameters, etc. The configuration file can also prescribe several simulation runs; in the simplest case, they will be executed by the simulation program one after another.
The output of the simulation is written into result files: output vector files, output scalar files, and possibly the user's own output files. OMNeT++ contains an Integrated Development Environment (IDE) that provides a rich environment for analyzing these files. Output files are line-oriented text files, which makes it possible to process them with a variety of other tools and programming languages as well, including Matlab, GNU R, Perl, Python, and spreadsheet programs.
The primary purpose of user interfaces is to make the internals of the model visible to the user, to control simulation execution, and possibly allow the user to intervene by changing variables/objects inside the model. This is very important in the development/debugging phase of the simulation project. Equally important, hands-on experience allows the user to get a feel for the model's behavior. The graphical user interface can also be used to demonstrate a model's operation.
The same simulation model can be executed with various user interfaces, with no change in the model files themselves. The user would typically test and debug the simulation with a powerful graphical user interface, and finally run it with a simple, fast user interface that supports batch execution.
Module types can be stored in files separate from the place of their actual use, enabling the user to group existing module types and create component libraries.
A simulation executable can store several independent models that use the same set of simple modules. The user can specify in the configuration file which model is to be run. This allows one to build one large executable that contains several simulation models, and distribute it as a standalone simulation tool. The flexibility of the topology description language also supports this approach.
An OMNeT++ installation contains the following subdirectories. Depending on the platform, there may also be additional directories present, containing software bundled with OMNeT++.
The simulation system itself:
omnetpp/                  OMNeT++ root directory
  bin/                    OMNeT++ executables
  include/                header files for simulation models
  lib/                    library files
  images/                 icons and backgrounds for network graphics
  doc/                    manuals, readme files, license, APIs, etc.
    ide-customization-guide/  how to write new wizards for the IDE
    ide-developersguide/  writing extensions for the IDE
    manual/               manual in HTML
    ned2/                 DTD definition of the XML syntax for NED files
    tictoc-tutorial/      introduction into using OMNeT++
    api/                  API reference in HTML
    nedxml-api/           API reference for the NEDXML library
    parsim-api/           API reference for the parallel simulation library
  src/                    OMNeT++ sources
    sim/                  simulation kernel
      parsim/             files for distributed execution
      netbuilder/         files for dynamically reading NED files
    envir/                common code for user interfaces
    cmdenv/               command-line user interface
    qtenv/                Qt-based user interface
    nedxml/               NEDXML library, opp_nedtool, opp_msgtool
    scave/                result analysis library, opp_scavetool
    eventlog/             eventlog processing library
    layout/               graph layouter for network graphics
    common/               common library
    utils/                opp_makemake, opp_test, etc.
  ide/                    Simulation IDE
  python/                 Python libraries for OMNeT++
    omnetpp/              Python package name
      scave/              Python API for result analysis
    ...
  test/                   Regression test suite
    core/                 tests for the simulation library
    anim/                 tests for graphics and animation
    dist/                 tests for the built-in distributions
    makemake/             tests for opp_makemake
    ...
The Eclipse-based Simulation IDE is in the ide directory.
ide/          Simulation IDE
  features/   Eclipse feature definitions
  plugins/    IDE plugins (extensions to the IDE can be dropped here)
  ...
The Windows version of OMNeT++ contains a redistribution of the MinGW gcc compiler, together with a copy of MSYS that provides Unix tools commonly used in Makefiles. The MSYS directory also contains various 3rd party open-source libraries needed to compile and run OMNeT++.
tools/ Platform specific tools and compilers (e.g. MinGW/MSYS on Windows)
Sample simulations are in the samples directory.
samples/      directories for sample simulations
  aloha/      models the Aloha protocol
  cqn/        Closed Queueing Network
  ...
The contrib directory contains material from the OMNeT++ community.
contrib/             directory for contributed material
  akaroa/            Patch to compile akaroa on newer gcc systems
  topologyexport/    Export the topology of a model in runtime
  ...
The user describes the structure of a simulation model in the NED language. NED stands for Network Description. NED lets the user declare simple modules, and connect and assemble them into compound modules. The user can label some compound modules as networks; that is, self-contained simulation models. Channels are another component type, whose instances can also be used in compound modules.
The NED language has several features which let it scale well to large projects: hierarchical and component-based design, module and channel interfaces, inheritance, packages, inner types, and metadata annotations.
The NED language has an equivalent tree representation which can be serialized to XML; that is, NED files can be converted to XML and back without loss of data, including comments. This lowers the barrier for programmatic manipulation of NED files; for example extracting information, refactoring and transforming NED, generating NED from information stored in other systems like SQL databases, and so on.
In this section we introduce the NED language via a complete and reasonably real-life example: a communication network.
Our hypothetical network consists of nodes. On each node, an application runs that generates packets at random intervals. The nodes also act as routers. We assume that the application uses datagram-based communication, so that the transport layer can be left out of the model.
First we'll define the network, then in the next sections we'll continue to define the network nodes.
Let the network topology be as in Figure below.
The corresponding NED description would look like this:
//
// A network
//
network Network
{
    submodules:
        node1: Node;
        node2: Node;
        node3: Node;
        ...
    connections:
        node1.port++ <--> {datarate=100Mbps;} <--> node2.port++;
        node2.port++ <--> {datarate=100Mbps;} <--> node4.port++;
        node4.port++ <--> {datarate=100Mbps;} <--> node6.port++;
        ...
}
The above code defines a network type named Network. Note that the NED language uses the familiar curly brace syntax, and “//” to denote comments.
The network contains several nodes, named node1, node2, etc. from the NED module type Node. We'll define Node in the next sections.
The second half of the declaration defines how the nodes are to be connected. The double arrow means a bidirectional connection. The connection points of modules are called gates, and the port++ notation adds a new gate to the port[] gate vector. Gates and connections will be covered in more detail in sections [3.7] and [3.9]. The nodes are connected with a channel that has a data rate of 100Mbps.
The above code would be placed into a file named Net6.ned. It is a convention to put every NED definition into its own file and to name the file accordingly, but it is not mandatory to do so.
One can define any number of networks in the NED files, and for every simulation the user has to specify which network to set up. The usual way of specifying the network is to put the network option into the configuration (by default the omnetpp.ini file):
[General]
network = Network
It is cumbersome to have to repeat the data rate for every connection. Luckily, NED provides a convenient solution: one can create a new channel type that encapsulates the data rate setting, and this channel type can be defined inside the network so that it does not litter the global namespace.
The improved network will look like this:
//
// A Network
//
network Network
{
    types:
        channel C extends ned.DatarateChannel {
            datarate = 100Mbps;
        }
    submodules:
        node1: Node;
        node2: Node;
        node3: Node;
        ...
    connections:
        node1.port++ <--> C <--> node2.port++;
        node2.port++ <--> C <--> node4.port++;
        node4.port++ <--> C <--> node6.port++;
        ...
}
Later sections will cover the concepts used (inner types, channels, the DatarateChannel built-in type, inheritance) in detail.
Simple modules are the basic building blocks for other (compound) modules, denoted by the simple keyword. All active behavior in the model is encapsulated in simple modules. Behavior is defined with a C++ class; NED files only declare the externally visible interface of the module (gates, parameters).
In our example, we could define Node as a simple module. However, its functionality is quite complex (traffic generation, routing, etc), so it is better to implement it with several smaller simple module types which we are going to assemble into a compound module. We'll have one simple module for traffic generation (App), one for routing (Routing), and one for queueing up packets to be sent out (Queue). For brevity, we omit the bodies of the latter two in the code below.
simple App
{
    parameters:
        int destAddress;
        ...
        @display("i=block/browser");
    gates:
        input in;
        output out;
}

simple Routing
{
    ...
}

simple Queue
{
    ...
}
By convention, the above simple module declarations go into the App.ned, Routing.ned and Queue.ned files.
Let us look at the first simple module type declaration. App has a parameter called destAddress (others have been omitted for now), and two gates named out and in for sending and receiving application packets.
The argument of @display() is called a display string, and it defines the rendering of the module in graphical environments; "i=..." defines the default icon.
Generally, @-words like @display are called properties in NED, and they are used to annotate various objects with metadata. Properties can be attached to files, modules, parameters, gates, connections, and other objects, and property values have a very flexible syntax.
Now we can assemble App, Routing and Queue into the compound module Node. A compound module can be thought of as a “cardboard box” that groups other modules into a larger unit, which can further be used as a building block for other modules; networks are also a kind of compound module.
module Node
{
    parameters:
        int address;
        @display("i=misc/node_vs,gold");
    gates:
        inout port[];
    submodules:
        app: App;
        routing: Routing;
        queue[sizeof(port)]: Queue;
    connections:
        routing.localOut --> app.in;
        routing.localIn <-- app.out;
        for i=0..sizeof(port)-1 {
            routing.out[i] --> queue[i].in;
            routing.in[i] <-- queue[i].out;
            queue[i].line <--> port[i];
        }
}
Compound modules, like simple modules, may have parameters and gates. Our Node module contains an address parameter, plus a gate vector of unspecified size, named port. The actual gate vector size will be determined implicitly by the number of neighbours when we create a network from nodes of this type. The type of port[] is inout, which allows bidirectional connections.
The modules that make up the compound module are listed under submodules. Our Node compound module type has an app and a routing submodule, plus a queue[] submodule vector that contains one Queue module for each port, as specified by [sizeof(port)]. (It is legal to refer to [sizeof(port)] because the network is built in top-down order, and the node is already created and connected at network level when its submodule structure is built out.)
In the connections section, the submodules are connected to each other and to the parent module. Single arrows connect input and output gates, and double arrows connect inout gates. A for loop is used to connect the routing module to each queue module, and to connect the outgoing/incoming link (line gate) of each queue to the corresponding port of the enclosing module.
We have created the NED definitions for this example, but how are they used by OMNeT++? When the simulation program is started, it loads the NED files. The program should already contain the C++ classes that implement the needed simple modules, App, Routing and Queue; their C++ code is either part of the executable or is loaded from a shared library. The simulation program also loads the configuration (omnetpp.ini), and determines from it that the simulation model to be run is the Network network. Then the network is instantiated for simulation.
The simulation model is built in a top-down preorder fashion. This means that starting from an empty system module, all submodules are created, their parameters and gate vector sizes are assigned, and they are fully connected before the submodule internals are built.
In the following sections we'll go through the elements of the NED language and look at them in more detail.
Simple modules are the active components in the model. Simple modules are defined with the simple keyword.
An example simple module:
simple Queue
{
    parameters:
        int capacity;
        @display("i=block/queue");
    gates:
        input in;
        output out;
}
Both the parameters and gates sections are optional, that is, they can be left out if there is no parameter or gate. In addition, the parameters keyword itself is optional too; it can be left out even if there are parameters or properties.
Note that the NED definition doesn't contain any code to define the operation of the module: that part is expressed in C++. By default, OMNeT++ looks for C++ classes of the same name as the NED type (so here, Queue).
One can explicitly specify the C++ class with the @class property. Classes with namespace qualifiers are also accepted, as shown in the following example that uses the mylib::Queue class:
simple Queue
{
    parameters:
        int capacity;
        @class(mylib::Queue);
        @display("i=block/queue");
    gates:
        input in;
        output out;
}
If there are several modules whose C++ implementation classes are in the same namespace, a better alternative to @class is the @namespace property. The C++ namespace given with @namespace will be prepended to the normal class name. In the following example, the C++ classes will be mylib::App, mylib::Router and mylib::Queue:
@namespace(mylib);

simple App
{
    ...
}

simple Router
{
    ...
}

simple Queue
{
    ...
}
The @namespace property may be specified not only at file level, as in the above example, but also for whole packages. When placed in a file called package.ned, the namespace will apply to all components in that package and below.
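For instance, a package.ned file carrying a namespace declaration might look like the following sketch; the package name org.example.mylib is made up for illustration:

// package.ned -- applies to every NED file in this package and its subpackages
package org.example.mylib;

@namespace(mylib);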
The implementation C++ classes need to be subclassed from the cSimpleModule library class; chapter [4] of this manual describes in detail how to write them.
Simple modules can be extended (or specialized) via subclassing. The motivation for subclassing can be to set some open parameters or gate sizes to a fixed value (see [3.6] and [3.7]), or to replace the C++ class with a different one. By default, the derived NED module type inherits the C++ class from its base, so it is important to remember to specify @class whenever the derived type should use a new class.
The following example shows how to specialize a module by setting a parameter to a fixed value (and leaving the C++ class unchanged):
simple Queue
{
    int capacity;
    ...
}

simple BoundedQueue extends Queue
{
    capacity = 10;
}
In the next example, the author wrote a PriorityQueue C++ class, and wants to have a corresponding NED type, derived from Queue. However, it does not work as expected:
simple PriorityQueue extends Queue  // wrong! still uses the Queue C++ class
{
}
The correct solution is to add a @class property to override the inherited C++ class:
simple PriorityQueue extends Queue
{
    @class(PriorityQueue);
}
Inheritance in general will be discussed in section [3.13].
A compound module groups other modules into a larger unit. A compound module may have gates and parameters like a simple module, but no active behavior is associated with it.
A compound module declaration may contain several sections, all of them optional:
module Host
{
    types:
        ...
    parameters:
        ...
    gates:
        ...
    submodules:
        ...
    connections:
        ...
}
Modules contained in a compound module are called submodules, and they are listed in the submodules section. One can create arrays of submodules (i.e. submodule vectors), and the submodule type may come from a parameter.
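As a quick preview of parametric submodule types (treated in detail in section [3.11]), the following sketch shows a submodule vector whose size and type both come from parameters; the Router, IQueue and DropTailQueue names are hypothetical:

module Router
{
    parameters:
        int numPorts;
        string queueType = default("DropTailQueue");  // name of the NED type to instantiate
    gates:
        inout port[numPorts];
    submodules:
        queue[numPorts]: <queueType> like IQueue;     // concrete type chosen when the network is set up
    // connections omitted from this sketch
}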
Connections are listed under the connections section of the declaration. One can create connections using simple programming constructs (loop, conditional). Connection behaviour can be defined by associating a channel with the connection; the channel type may also come from a parameter.
Module and channel types only used locally can be defined in the types section as inner types, so that they do not pollute the namespace.
Compound modules may be extended via subclassing. Inheritance may add new submodules and new connections as well, not only parameters and gates. Also, one may refer to inherited submodules, to inherited types etc. What is not possible is to "de-inherit" or modify submodules or connections.
In the following example, we show how to assemble common protocols into a "stub" for wireless hosts, and add user agents via subclassing.
module WirelessHostBase
{
    gates:
        input radioIn;
    submodules:
        tcp: TCP;
        ip: IP;
        wlan: Ieee80211;
    connections:
        tcp.ipOut --> ip.tcpIn;
        tcp.ipIn <-- ip.tcpOut;
        ip.nicOut++ --> wlan.ipIn;
        ip.nicIn++ <-- wlan.ipOut;
        wlan.radioIn <-- radioIn;
}

module WirelessHost extends WirelessHostBase
{
    submodules:
        webAgent: WebAgent;
    connections:
        webAgent.tcpOut --> tcp.appIn++;
        webAgent.tcpIn <-- tcp.appOut++;
}
The WirelessHost compound module can further be extended, for example with an Ethernet port:
module DesktopHost extends WirelessHost
{
    gates:
        inout ethg;
    submodules:
        eth: EthernetNic;
    connections:
        ip.nicOut++ --> eth.ipIn;
        ip.nicIn++ <-- eth.ipOut;
        eth.phy <--> ethg;
}
Channels encapsulate parameters and behaviour associated with connections. Channels are like simple modules, in the sense that there are C++ classes behind them. The rules for finding the C++ class for a NED channel type are the same as with simple modules: the default class name is the NED type name unless there is a @class property (@namespace is also recognized), and the C++ class is inherited when the channel is subclassed.
Thus, the following channel type would expect a CustomChannel C++ class to be present:
channel CustomChannel  // requires a CustomChannel C++ class
{
}
The practical difference compared to modules is that one rarely needs to write a custom channel C++ class, because there are predefined channel types that one can subclass from, inheriting their C++ code. The predefined types are: ned.IdealChannel, ned.DelayChannel and ned.DatarateChannel. (“ned” is the package name; one can get rid of it by importing the types with the import ned.* directive. Packages and imports are described in section [3.14].)
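For example, after importing a predefined channel type, the package prefix can be omitted when subclassing; the following is just a sketch reusing the channel C from the earlier quickstart example:

import ned.DatarateChannel;

channel C extends DatarateChannel  // no ned. prefix needed after the import
{
    datarate = 100Mbps;
}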
IdealChannel has no parameters, and lets through all messages without delay or any side effect. A connection without a channel object and a connection with an IdealChannel behave in the same way. Still, IdealChannel has its uses, for example when a channel object is required so that it can carry a new property or parameter that is going to be read by other parts of the simulation model.
DelayChannel has two parameters: delay, a double parameter that specifies the propagation delay of messages travelling through the channel; and disabled, a boolean parameter that defaults to false, and causes the channel to drop all messages when set to true.
DatarateChannel has a few additional parameters compared to DelayChannel: datarate, the transmission data rate used for computing the transmission duration of packets (zero is treated as infinite bandwidth); and ber and per, the bit error rate and packet error rate, which allow for basic error modeling.
The following example shows how to create a new channel type by specializing DatarateChannel:
channel Ethernet100 extends ned.DatarateChannel
{
    datarate = 100Mbps;
    delay = 100us;
    ber = 1e-10;
}
One may add parameters and properties to channels via subclassing, and may modify existing ones. In the following example, we introduce distance-based calculation of the propagation delay:
channel DatarateChannel2 extends ned.DatarateChannel
{
    double distance @unit(m);
    delay = this.distance / 200000km * 1s;
}
Parameters are primarily intended to be read by the underlying C++ class, but new parameters may also be added as annotations to be used by other parts of the model. For example, a cost parameter may be used for routing decisions in a routing module, as shown in the example below. The example also shows annotation using properties (@backbone).
channel Backbone extends ned.DatarateChannel
{
    @backbone;
    double cost = default(1);
}
Parameters are variables that belong to a module. Parameters can be used in building the topology (number of nodes, etc), and to supply input to C++ code that implements simple modules and channels.
Parameters can be of type double, int, bool, string, xml and object; they can also be declared volatile. For the numeric types, a unit of measurement can also be specified (@unit property).
Parameters can get their value from NED files or from the configuration (omnetpp.ini). A default value can also be given (default(...)), which is used if the parameter is not assigned otherwise.
The following example shows a simple module that has five parameters, three of which have default values:
simple App
{
    parameters:
        string protocol;  // protocol to use: "UDP" / "IP" / "ICMP" / ...
        int destAddress;  // destination address
        volatile double sendInterval @unit(s) = default(exponential(1s));  // time between generating packets
        volatile int packetLength @unit(byte) = default(100B);  // length of one packet
        volatile int timeToLive = default(32);  // maximum number of network hops to survive
    gates:
        input in;
        output out;
}
Parameters may get their values in several ways: from NED code, from the configuration (omnetpp.ini), or even, interactively from the user. NED lets one assign parameters at several places: in subclasses via inheritance; in submodule and connection definitions where the NED type is instantiated; and in networks and compound modules that directly or indirectly contain the corresponding submodule or connection.
For instance, one could specialize the above App module type via inheritance with the following definition:
simple PingApp extends App
{
    parameters:
        protocol = "ICMP/ECHO";
        sendInterval = default(1s);
        packetLength = default(64byte);
}
This definition sets the protocol parameter to a fixed value ("ICMP/ECHO"), and changes the default values of the sendInterval and packetLength parameters. protocol is now locked down in PingApp; its value cannot be modified via further subclassing or by other means. sendInterval and packetLength are still unassigned here, only their default values have been overridden.
Now, let us see the definition of a Host compound module that uses PingApp as submodule:
module Host
{
    submodules:
        ping: PingApp {
            packetLength = 128B;  // always ping with 128-byte packets
        }
        ...
}
This definition sets the packetLength parameter to a fixed value. It is now hardcoded that Hosts send 128-byte ping packets; this setting cannot be changed from NED or the configuration.
It is not only possible to set a parameter from the compound module that contains the submodule, but also from modules higher up in the module tree. A network that employs several Host modules could be defined like this:
network Network
{
    submodules:
        host[100]: Host {
            ping.timeToLive = default(3);
            ping.destAddress = default(0);
        }
        ...
}
Parameter assignment can also be placed into the parameters block of the parent compound module, which provides additional flexibility. The following definition sets up the hosts so that half of them pings host #50, and the other half pings host #0:
network Network
{
    parameters:
        host[*].ping.timeToLive = default(3);
        host[0..49].ping.destAddress = default(50);
        host[50..].ping.destAddress = default(0);
    submodules:
        host[100]: Host;
        ...
}
Note the use of asterisk to match any index, and .. to match index ranges.
If there were a number of individual hosts instead of a submodule vector, the network definition could look like this:
network Network
{
    parameters:
        host*.ping.timeToLive = default(3);
        host{0..49}.ping.destAddress = default(50);
        host{50..}.ping.destAddress = default(0);
    submodules:
        host0: Host;
        host1: Host;
        host2: Host;
        ...
        host99: Host;
}
An asterisk matches any substring not containing a dot, and a .. within a pair of curly braces matches a natural number embedded in a string.
In most assignments we have seen above, the left-hand side of the equal sign contained a dot and often a wildcard as well (asterisk or numeric range); we call these assignments pattern assignments or deep assignments.
There is one more wildcard that can be used in pattern assignments, and this is the double asterisk; it matches any sequence of characters including dots, so it can match multiple path elements. An example:
network Network
{
    parameters:
        **.timeToLive = default(3);
        **.destAddress = default(0);
    submodules:
        host0: Host;
        host1: Host;
        ...
}
Note that some assignments in the above examples changed default values, while others set parameters to fixed values. Parameters that received no fixed value in the NED files can be assigned from the configuration (omnetpp.ini).
A parameter can be assigned in the configuration using a similar syntax as NED pattern assignments (actually, it would be more historically accurate to say it the other way round, that NED pattern assignments use a similar syntax to ini files):
Network.host[*].ping.sendInterval = 500ms   # for the host[100] example
Network.host*.ping.sendInterval = 500ms     # for the host0, host1, ... example
**.sendInterval = 500ms
One often uses the double asterisk to save typing. One can write:
**.ping.sendInterval = 500ms
Or if one is certain that only ping modules have sendInterval parameters, the following will suffice:
**.sendInterval = 500ms
Parameter assignments in the configuration are described in section [10.3].
One can also write expressions, including stochastic expressions, in NED files and in ini files as well. For example, here's how one can add jitter to the sending of ping packets:
**.sendInterval = 1s + normal(0s, 0.001s) # or just: normal(1s, 0.001s)
If there is no assignment for a parameter in NED or in the ini file, the default value (given with =default(...) in NED) will be applied implicitly. If there is no default value, the user will be asked, provided the simulation program is allowed to do that; otherwise there will be an error. (Interactive mode is typically disabled for batch executions where it would do more harm than good.)
It is also possible to explicitly apply the default (this can sometimes be useful):
**.sendInterval = default
Finally, one can explicitly ask the simulator to prompt the user interactively for the value (again, provided that interactivity is enabled; otherwise this will result in an error):
**.sendInterval = ask
Parameter values may be given with expressions. NED language expressions have a C-like syntax, with additions like quantities (numbers with measurement units, e.g. 100Gbps) and JSON constructs. Compared to C, there are some variations on operator names: binary and logical XOR are # and ##, while ^ has been reassigned to mean power-of instead. The + operator does string concatenation as well as numeric addition. There are two extra operators: <=> (“spaceship”) and =~ (“match”).
The spaceship operator <=> compares its two arguments and returns the result (“less”, “equal”, “greater” and “not applicable”) in the form of a negative, zero, positive or nan double number, respectively.
2 <=> 2     // --> 0
10 <=> 5    // --> 1
2 <=> nan   // --> nan
The string match operator =~ matches the string on its left against the wildcard pattern on its right, and returns a boolean:
"foo" =~ "f*" // --> true "foo" =~ "b*" // --> false "foo" =~ "F*" // --> false "foo.bar.baz" =~ "*.baz" // --> false "foo.bar.baz" =~ "**.baz" // --> true "foo[15]" =~ "foo[5..20]" // --> true "foo15" =~ "foo{5..20}" // --> true
Expressions may refer to module parameters, gate vector and module vector sizes (using the sizeof operator), existence of a submodule or submodule vector (exists operator), and the index of the current module in a submodule vector (index).
The special operator expr() can be used to pass a formula into a module as a parameter ([3.6.9]).
Expressions may also utilize various numeric, string, stochastic and miscellaneous other functions (fabs(), uniform(), lognormal(), substring(), readFile(), etc.).
Expressions may refer to parameters of the compound module being defined, parameters of the current module, and to parameters of already defined submodules, with the syntax submodule.parametername (or submodule[index].parametername).
Unqualified parameter names refer to a parameter of the compound module, wherever it occurs within the compound module definition. For example, all foo references in the following example refer to the network's foo parameter.
network Network {
    parameters:
        double foo;
        double bar = foo;
    submodules:
        node[10]: Node {
            baz = foo;
        }
        ...
}
Use the this qualifier to refer to another parameter of the same submodule:
submodules: node: Node { datarate = this.amount / this.duration; }
From OMNeT++ 5.7 onwards, there is also a parent qualifier with the obvious meaning.
Volatile parameters are those marked with the volatile modifier keyword. Normally, expressions assigned to parameters are evaluated once, and the resulting values are stored in the parameters. In contrast, a volatile parameter holds the expression itself, and it is evaluated every time the parameter is read. Therefore, if the expression contains a stochastic or changing component, such as normal(0,1) (a random value from the unit normal distribution) or simTime() (the current simulation time), reading the parameter may yield a different value every time.
If a parameter is marked volatile, the C++ code that implements the corresponding module is expected to re-read the parameter every time a new value is needed, as opposed to reading it once and caching the value in a variable.
To demonstrate the use of volatile, suppose we have a Queue simple module that has a volatile double parameter named serviceTime.
simple Queue { parameters: volatile double serviceTime; }
Because of the volatile modifier, the C++ code underlying the queue module is supposed to read the serviceTime parameter for every job serviced. Thus, if a stochastic value like uniform(0.5s, 1.5s) is assigned to the parameter, the expression will be evaluated every time, and every job will likely have a different, random service time.
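To make this concrete, here is a minimal, hypothetical C++ sketch of how such a queue might honor the volatile modifier by re-reading the parameter for every job (the class name, the out gate and the simplified scheduling logic are illustrative assumptions, not the manual's own example):

#include <omnetpp.h>
using namespace omnetpp;

class VolatileDemoQueue : public cSimpleModule   // hypothetical module
{
  protected:
    virtual void handleMessage(cMessage *msg) override {
        if (msg->isSelfMessage()) {
            send(msg, "out");   // service finished, pass the job on
        }
        else {
            // A new job arrived: re-read the volatile parameter for this job.
            // Every read re-evaluates the assigned expression, e.g. uniform(0.5s, 1.5s).
            simtime_t serviceTime = par("serviceTime");
            // Simplified: ignore queueing, just schedule the end of service.
            scheduleAt(simTime() + serviceTime, msg);
        }
    }
};

Define_Module(VolatileDemoQueue);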
As another example, here's how one can have a time-varying parameter by exploiting the simTime() NED function:
**.serviceTime = simTime()<1000s ? 1s : 2s # queue that slows down after 1000s
A parameter is marked as mutable by adding the @mutable property to it. Mutable parameters can be set to a different value during runtime, whereas normal, i.e. non-mutable parameters cannot be changed after their initial assignment (attempts to do so will result in an error being raised).
Parameter mutability addresses the fact that although it would be technically possible to allow changing the value of any parameter during runtime, it only really makes sense to do so if the change actually takes effect; otherwise, users making the change could be misled.
For example, if a module is implemented in C++ in a way that it only reads a parameter once and then uses the cached value throughout, it would be misleading to allow changing the parameter's value during simulation. For a parameter to rightfully be marked as @mutable, the module's implementation has to be explicitly prepared to handle runtime parameter changes (see section [4.5.5]).
As a practical example, a drop-tail queue module could have a maxLength parameter which controls the maximum number of elements the queue can hold. If it was allowed to set the maxLength parameter to a different value at runtime but the module would continue to operate according to the initially configured value throughout the entire simulation, that could falsify simulation results.
simple Queue {
    parameters:
        int maxLength @mutable;  // @mutable indicates that Queue's implementation
                                 // is prepared for handling runtime changes in the
                                 // value of the maximum queue length.
    ...
}
In a model framework that contains a large number of modules with many parameters, the presence or absence of @mutable allows the user to know which are the parameters whose runtime changes are properly handled by their modules. This is an important input for determining what kinds of experiments can be done with the model.
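As a hedged illustration of what such preparedness might look like in C++ (the class name and the drop logic are made up for this sketch; handleParameterChange() itself is described in section [4.5.5]):

#include <cstring>
#include <omnetpp.h>
using namespace omnetpp;

class DropTailQueue : public cSimpleModule   // hypothetical module
{
  protected:
    int maxLength;
    cQueue buffer;

    virtual void initialize() override {
        maxLength = par("maxLength");
    }

    virtual void handleParameterChange(const char *parname) override {
        // invoked by the kernel when a parameter of this module changes at runtime
        if (strcmp(parname, "maxLength") == 0)
            maxLength = par("maxLength");   // pick up the new limit immediately
    }

    virtual void handleMessage(cMessage *msg) override {
        if (buffer.getLength() >= maxLength)
            delete msg;          // queue full: drop
        else
            buffer.insert(msg);  // (service logic omitted for brevity)
    }
};

Define_Module(DropTailQueue);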
One can declare a parameter to have an associated unit of measurement by adding the @unit property. An example:
simple App {
    parameters:
        volatile double sendInterval @unit(s) = default(exponential(350ms));
        volatile int packetLength @unit(byte) = default(4KiB);
    ...
}
The @unit(s) and @unit(byte) declarations specify the measurement unit for the parameter. Values assigned to parameters must have the same or compatible unit, i.e. @unit(s) accepts milliseconds, nanoseconds, minutes, hours, etc., and @unit(byte) accepts kilobytes, megabytes, etc. as well.
The OMNeT++ runtime does a full and rigorous unit check on parameters to ensure “unit safety” of models. Constants should always include the measurement unit.
The @unit property of a parameter cannot be added or overridden in subclasses or in submodule declarations.
OMNeT++ supports two explicit ways of passing structured data to a module using parameters: XML parameters, and object parameters with JSON-style structured data. This section describes the former, and the next one the latter.
XML parameters are declared with the keyword xml. When using XML parameters, OMNeT++ reads the XML document for you, DTD-validates it (if it contains a DOCTYPE), and presents the contents as a DOM-like object tree. It is also possible to assign a part (i.e. subtree) of a document to the parameter; the subtree can be selected using an XPath-subset notation. OMNeT++ caches the content of the document, so it is loaded only once even if it is referenced by multiple parameters.
Values for an XML parameter can be produced using the xmldoc() and the xml() functions. xmldoc() accepts a filename as argument, while xml() parses its string argument as XML content. Of course, one can assign xml parameters both from NED and from omnetpp.ini.
The following example declares an xml parameter, and assigns the contents of an XML file to it. The file name is understood as being relative to the working directory.
simple TrafGen {
    parameters:
        xml profile;
    gates:
        output out;
}

module Node {
    submodules:
        trafGen1 : TrafGen {
            profile = xmldoc("data.xml");
        }
    ...
}
xmldoc() also lets one select an element within an XML document. In case a simulation model contains numerous modules that need XML input, this feature allows the user to get rid of the many small XML files by aggregating them into a single XML file. For example, the following XML file contains two profiles identified with the IDs gen1 and gen2:
<?xml version="1.0"?>
<root>
    <profile id="gen1">
        <param>3</param>
        <param>5</param>
    </profile>
    <profile id="gen2">
        <param>9</param>
    </profile>
</root>
And one can assign each profile to a corresponding submodule using an XPath-like expression:
module Node { submodules: trafGen1 : TrafGen { profile = xmldoc("all.xml", "/root/profile[@id='gen1']"); } trafGen2 : TrafGen { profile = xmldoc("all.xml", "/root/profile[@id='gen2']"); } }
The following example shows how to specify XML content using a string literal, with the xml() function. This is especially useful for specifying a default value.
simple TrafGen {
    parameters:
        xml profile = xml("<root/>");  // empty document as default
    ...
}
The xml() function, like xmldoc(), also supports an optional second XPath parameter for selecting a subtree.
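For completeness, the following hedged C++ sketch shows one way the TrafGen module's code might traverse the selected profile subtree (element names follow the example above; error handling is omitted, and the module body is an illustration rather than a reference implementation):

#include <cstdlib>
#include <omnetpp.h>
using namespace omnetpp;

class TrafGen : public cSimpleModule
{
  protected:
    virtual void initialize() override {
        cXMLElement *profile = par("profile").xmlValue();
        // iterate over the <param> children of the selected <profile> element
        for (cXMLElement *param : profile->getChildrenByTagName("param")) {
            const char *text = param->getNodeValue();   // e.g. "3" or "5"
            int value = text ? atoi(text) : 0;
            EV << "profile parameter: " << value << "\n";
        }
    }
};

Define_Module(TrafGen);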
Object parameters are declared with the keyword object. The values of object parameters are C++ objects, which can hold arbitrary data and can be constructed in various ways in NED. Although object parameters were introduced in OMNeT++ only in version 6.0, they are now the preferred way of passing structured data to modules.
There are two basic constructs in NED for creating objects: the array and the object syntax. The array syntax is a pair of square brackets that encloses the list of comma-separated array elements: [ value1, value2, ... ]. The object (a.k.a. dictionary) syntax uses curly braces around key-value pairs, the separators being colon and comma: { key1 : value1, key2:value2, ... }. These constructs can be composed, so an array may contain objects and further arrays as elements, and similarly, an object may contain arrays and further objects as values, and so on. This allows describing complex data structures, with a JSON-like notation.
The notation is only JSON-like, as the syntax rules are more relaxed than in JSON. All valid JSON is accepted, but also some more. The main difference is that in JSON, values in arrays and objects may only be constants or null, while OMNeT++ allows NED expressions as values: quantities, nan/inf, parameter references, functions, arithmetic operations, etc., are all accepted.
An extra relaxation and convenience compared to strict JSON is that quotation marks around object keys may be left out, as long as the key complies with the identifier syntax.
Another extension is that for objects, the desired C++ class may be specified in front of the open curly brace: classname { key1 : value1, ... }. The object will be created and filled in using OMNeT++'s reflection features. This allows internal data structures of modules to be filled out directly, so it eliminates most of the "parsing" code which is otherwise necessary. More about this feature will be written in the chapter about C++ programming (section [4.5.4]).
Object parameters with JSON-style values obsolete several workarounds that were used in pre-6.0 OMNeT++ versions for passing structured data to modules, for example using strings to specify numeric arrays, or using text files with ad-hoc syntax as configuration or data files. JSON-style values are also more convenient than XML input.
After this introduction, let us see some examples! We begin with a list of completely made-up object parameter assignments, to show the syntax and the possibilities:
simple Example {
    parameters:
        object array1 = [];                                    // empty array
        object array2 = [2, 5, 3, -1];                         // array of integers
        object array3 = [ 3, 24.5mW, "Hello", false, true ];   // misc array
        object array4 = [ nan, inf, inf s, null, nullptr ];    // special values

        object object1 = {};                                   // empty object
        object object2 = { foo: 100, bar: "Hello" };           // object with 2 fields
        object object3 = { "foo": 100, "bar": "Hello" };       // keys with quotes

        // composition of objects and arrays
        object array5 = [ [1,2,3], [4,5,6], [7,8,9] ];
        object array6 = [ { foo: 100, bar: "Hello" }, { baz: false } ];
        object object4 = { foo : [1,2,3], bar : [4,5,6] };
        object object5 = { obj : { foo: 1, bar: 2 }, array: [1, 2, 3 ] };

        // expression, parameter references
        double x = default(1);
        object misc = [ x, 2*x, floor(3.14), uniform(0,10) ];  // [1,2,3,?]

        // default values
        object default1 = default([]);         // empty array by default
        object default2 = default({});         // empty object by default
        object default3 = default([1,2,3]);    // some array by default
        object default4 = default(nullptr);    // null pointer by default
}
The following, more practical example demonstrates how one could describe an IPv4 routing table. Each route is represented as an object, and the table itself is represented as an array of routes.
object routes = [
    { dest: "10.0.0.0", netmask: "255.255.0.0", interf: "eth0", metric: 10 },
    { dest: "10.1.0.0", netmask: "255.255.0.0", interf: "eth1", metric: 20 },
    { dest: "*", interf: "eth2" },
];
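A hedged sketch of how a module's C++ code might consume such a parameter, using the cValueArray/cValueMap classes that OMNeT++ 6 uses to represent JSON-style values (the module name and the exact accessor calls are illustrative; see section [4.5.4] for the actual API description):

#include <omnetpp.h>
using namespace omnetpp;

class RoutingTableReader : public cSimpleModule   // hypothetical module
{
  protected:
    virtual void initialize() override {
        auto *routes = check_and_cast<cValueArray *>(par("routes").objectValue());
        for (int i = 0; i < routes->size(); i++) {
            auto *route = check_and_cast<cValueMap *>(routes->get(i).objectValue());
            const char *dest = route->get("dest").stringValue();
            const char *interf = route->get("interf").stringValue();
            long metric = route->containsKey("metric") ? (long)route->get("metric").intValue() : 0;
            EV << "route to " << dest << " via " << interf << " (metric " << metric << ")\n";
        }
    }
};

Define_Module(RoutingTableReader);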
The next example shows the use of the extended object syntax for specifying a "template" for the packets that a traffic source module should generate. Note the stochastic expression for the byteLength field, and that the parameter is declared as volatile. Every time the module needs to send a packet, its C++ code should read the packetToSend parameter, which will cause the expression to be evaluated and a new packet of random length to be created that the module can send.
simple TrafficSource {
    parameters:
        volatile object packetToSend = default( cPacket { name: "data",
                                                          kind: 10,
                                                          byteLength: intuniform(64,4096) } );
        volatile double sendInterval @unit(s) = default(exponential(100ms));
}
Another traffic source module that supports a predetermined schedule of what to send at which points in time could have the following parameter to describe the schedule:
object sendSchedule = [
    { time: 1s, pk: cPacket { name: "pk1", byteLength: 64 } },
    { time: 2s, pk: cPacket { name: "pk2", byteLength: 76 } },
    { time: 3s, pk: cPacket { name: "pk3", byteLength: 32 } },
];
In the next example, we want to pass a trail given with its waypoints to a module. The module will get the data in an instance of a Trail C++ class expressly created for this purpose. This means that the module will get the trail data in a ready-to-use form just by reading the parameter, without having to do any parsing or additional processing.
We use a message file (chapter [5]) to define the classes; the C++ classes will be automatically generated by OMNeT++ from it.
// file: Trail.msg
struct Point {
    double x;
    double y;
}

class Trail extends cObject {
    Point waypoints[];
}
An actual trail can be specified in NED like this:
object trail = Trail {
    waypoints: [ { x: 1, y: 5 }, { x: 4, y: 6 }, { x: 3, y: 8 }, { x: 5, y: 3 } ]
};
Values for object parameters may also be placed in ini files, just like values for other parameter types. In ini files, indented lines are treated as continuations of the previous line, so the above example doesn't need trailing backslashes when moved to omnetpp.ini:
**.trail = Trail {
        waypoints: [ { x: 1, y: 5 }, { x: 4, y: 6 },
                     { x: 3, y: 8 }, { x: 5, y: 3 } ]
    }
The special operator expr() allows one to pass a formula into a module as a parameter. expr() takes an expression as argument, which syntactically must correspond to the general syntax of NED expressions. However, it is not a normal NED expression: it will not be interpreted and evaluated as one. Instead, it will be encapsulated into, and returned as, an object, and typically assigned to a module parameter.
The module may access the object via the parameter, and may evaluate the expression encapsulated in it any number of times during simulation. While doing so, the module's code can freely determine how various identifiers and other syntactical elements in the expression are to be interpreted.
Let us see a practical example. In the model of a wireless network, one of the tasks is to compute the path loss suffered by each wirelessly transmitted frame, as part of the procedure to determine whether the frame could be successfully received by the receiver node. There are several formulas for computing the path loss (free space, two-ray ground reflection, etc), and it depends on multiple factors which one to use. If the model author wants to leave it open for their users to specify the formula they want to use, they might define the model like so:
simple RadioMedium {
    parameters:
        object pathLoss;  // =expr(...): formula to compute path loss
    ...
}
The pathLoss parameter expects the formula to be given with expr(). The formula is expected to contain two variables, distance and frequency, which stand for the distance between the transmitter and the receiver and the packet transmission frequency, respectively. The module would evaluate the expression for each frame, binding values that correspond to the current frame to those variables.
Given the above, free space path loss would be specified to the module with the following formula (assuming isotropic antennas with the same polarization, etc.):
**.pathLoss = expr((4 * 3.14159 * distance * frequency / c) ^ 2)
The next example is borrowed from the INET Framework, which extensively uses expr() for specifying packet filter conditions. A few examples:
expr(hasBitError)
expr(name == 'P1')
expr(name =~ 'P*')
expr(totalLength == 128B)
expr(ipv4.destAddress.str() == '10.0.0.1' && udp.destPort == 42)
The interesting part is that the packet itself does not appear explicitly in the expressions. Instead, identifiers like hasBitError and name are interpreted as attributes of the packet, as if the user had written e.g. pk.hasBitError and pk.name. Similarly, ipv4 and udp stand for the IPv4 and UDP headers of the packet. The last line also shows that the interpretation of member accesses and method calls is also in the hands of the module's code.
The details of implementing expr() support in modules will be described as part of the simulation library, in section [7.8].
Gates are the connection points of modules. OMNeT++ has three types of gates: input, output and inout, the latter being essentially an input and an output gate glued together.
A gate, whether input or output, can only be connected to one other gate. (For compound module gates, this means one connection “outside” and one “inside”.) It is possible, though generally not recommended, to connect the input and output sides of an inout gate separately (see section [3.9]).
One can create single gates and gate vectors. The size of a gate vector can be given inside square brackets in the declaration, but it is also possible to leave it open by just writing a pair of empty brackets (“[]”).
When the gate vector size is left open, one can still specify it later, when subclassing the module, or when using the module for a submodule in a compound module. However, it does not need to be specified because one can create connections with the gate++ operator that automatically expands the gate vector.
The gate size can be queried from various NED expressions with the sizeof() operator.
NED normally requires that all gates be connected. To relax this requirement, one can annotate selected gates with the @loose property, which turns off the connectivity check for that gate. Also, input gates that solely exist so that the module can receive messages via sendDirect() (see [4.7.5]) should be annotated with @directIn. It is also possible to turn off the connectivity check for all gates within a compound module by specifying the allowunconnected keyword in the module's connections section.
Let us see some examples.
In the following example, the Classifier module has one input for receiving jobs, which it will send to one of the outputs. The number of outputs is determined by a module parameter:
simple Classifier {
    parameters:
        int numCategories;
    gates:
        input in;
        output out[numCategories];
}
The following Sink module also has its in[] gate defined as a vector, so that it can be connected to several modules:
simple Sink { gates: input in[]; }
The following lines define a node for building a square grid. Gates around the edges of the grid are expected to remain unconnected, hence the @loose annotation:
simple GridNode { gates: inout neighbour[4] @loose; }
WirelessNode below is expected to receive messages (radio transmissions) via direct sending, so its radioIn gate is marked with @directIn.
simple WirelessNode { gates: input radioIn @directIn; }
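For context, here is a hedged C++ sketch of the sender side: a hypothetical transmitter module delivering a frame straight to such a radioIn gate with sendDirect() (the module path, delay values and class name are assumptions made for illustration; direct sending is covered in section [4.7.5]):

#include <omnetpp.h>
using namespace omnetpp;

class Transmitter : public cSimpleModule   // hypothetical module
{
  protected:
    virtual void handleMessage(cMessage *msg) override {
        cPacket *frame = check_and_cast<cPacket *>(msg);
        // look up the receiver module (the path is made up for this sketch)
        cModule *receiver = getModuleByPath("^.^.host[1].radio");
        simtime_t propagationDelay = SimTime(10, SIMTIME_US);
        simtime_t duration = SimTime(1, SIMTIME_MS);
        // deliver the frame directly to the receiver's @directIn gate
        sendDirect(frame, propagationDelay, duration, receiver, "radioIn");
    }
};

Define_Module(Transmitter);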
In the following example, we define TreeNode as having gates to connect any number of children, then subclass it to get a BinaryTreeNode to set the gate size to two:
simple TreeNode {
    gates:
        inout parent;
        inout children[];
}

simple BinaryTreeNode extends TreeNode {
    gates:
        children[2];
}
An example for setting the gate vector size in a submodule, using the same TreeNode module type as above:
module BinaryTree {
    submodules:
        nodes[31]: TreeNode {
            gates:
                children[2];
        }
    connections:
        ...
}
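On the C++ side, gate vectors declared like these can be inspected with gateSize() and gate(); the following hedged sketch (with a made-up module name, reusing the children[] gate from the example above) simply logs which elements of the vector ended up connected:

#include <omnetpp.h>
using namespace omnetpp;

class GateLister : public cSimpleModule   // hypothetical module
{
  protected:
    virtual void initialize() override {
        int n = gateSize("children");            // current size of the children[] vector
        for (int i = 0; i < n; i++) {
            cGate *g = gate("children$o", i);    // output half of the i-th inout gate
            EV << g->getFullName()
               << (g->isConnected() ? ": connected" : ": unconnected") << "\n";
        }
    }
};

Define_Module(GateLister);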
Modules that a compound module is composed of are called its submodules. A submodule has a name, and it is an instance of a compound or simple module type. In the NED definition of a submodule, this module type is usually given statically, but it is also possible to specify the type with a string expression. (The latter feature, parametric submodule types, will be discussed in section [3.11.1].)
NED supports submodule arrays (vectors) and conditional submodules as well. Submodule vector size, unlike gate vector size, must always be specified and cannot be left open as with gates.
It is possible to add new submodules to an existing compound module via subclassing; this has been described in the section [3.4].
The basic syntax of submodules is shown below:
module Node {
    submodules:
        routing: Routing;             // a submodule
        queue[sizeof(port)]: Queue;   // submodule vector
    ...
}
As already seen in previous code examples, a submodule may also have a curly brace block as body, where one can assign parameters, set the size of gate vectors, and add/modify properties like the display string (@display). It is not possible to add new parameters and gates.
Display strings specified here will be merged with the display string from the type to get the effective display string. The merge algorithm is described in chapter [8].
module Node {
    gates:
        inout port[];
    submodules:
        routing: Routing {
            parameters:                             // this keyword is optional
                routingTable = "routingtable.txt";  // assign parameter
            gates:
                in[sizeof(port)];                   // set gate vector size
                out[sizeof(port)];
        }
        queue[sizeof(port)]: Queue {
            @display("t=queue id $id");  // modify display string
            id = 1000+index;             // use submodule index to generate different IDs
        }
    connections:
        ...
}
An empty body may be omitted, that is,
queue: Queue;
is the same as
queue: Queue { }
A submodule or submodule vector can be conditional. The if keyword and the condition itself go after the submodule type, as in the example below:
module Host {
    parameters:
        bool withTCP = default(true);
    submodules:
        tcp : TCP if withTCP;
        ...
}
Note that with submodule vectors, setting zero vector size can be used as an alternative to the if condition.
Connections are defined in the connections section of compound modules. Connections cannot span across hierarchy levels; one can connect two submodule gates, a submodule gate and the "inside" of the parent (compound) module's gates, or two gates of the parent module (though this is rarely useful), but it is not possible to connect to any gate outside the parent module, or inside compound submodules.
Input and output gates are connected with a normal arrow, and inout gates with a double-headed arrow “<-->”. To connect the two gates with a channel, use two arrows and put the channel specification in between. The same syntax is used to add properties such as @display to the connection.
Some examples have already been shown in the NED Quickstart section ([3.2]); let's see some more.
It has been mentioned that an inout gate is basically an input and an output gate glued together. These sub-gates can also be addressed (and connected) individually if needed, as port$i and port$o (or, for vector gates, as port$i[k] and port$o[k]).
Gates are specified as modulespec.gatespec (to connect a submodule), or as gatespec (to connect the compound module). modulespec is either a submodule name (for scalar submodules), or a submodule name plus an index in square brackets (for submodule vectors). For scalar gates, gatespec is the gate name; for gate vectors it is either the gate name plus an index in square brackets, or gatename++.
The gatename++ notation causes the first unconnected gate index to be used. If all gates of the given gate vector are connected, the behavior is different for submodules and for the enclosing compound module. For submodules, the gate vector expands by one. For a compound module, after the last gate is connected, ++ will stop with an error.
When the ++ operator is used with $i or $o (e.g. g$i++ or g$o++, see later), it will actually add a gate pair (input+output) to maintain equal gate sizes for the two directions.
Channel specifications (-->channelspec--> inside a connection) are similar to submodules in many respects. Let's see some examples!
The following connections use two user-defined channel types, Ethernet100 and Backbone. The code shows the syntax for assigning parameters (cost and length) and specifying a display string (and NED properties in general):
a.g++ <--> Ethernet100 <--> b.g++;
a.g++ <--> Backbone {cost=100; length=52km; ber=1e-8;} <--> b.g++;
a.g++ <--> Backbone {@display("ls=green,2");} <--> b.g++;
When using built-in channel types, the type name can be omitted; it will be inferred from the parameter names.
a.g++ <--> {delay=10ms;} <--> b.g++;
a.g++ <--> {delay=10ms; ber=1e-8;} <--> b.g++;
a.g++ <--> {@display("ls=red");} <--> b.g++;
If datarate, ber or per is assigned, ned.DatarateChannel will be chosen. Otherwise, if delay or disabled is present, it will be ned.DelayChannel; otherwise it is ned.IdealChannel. Naturally, if other parameter names are assigned in a connection without an explicit channel type, it will be an error (with “ned.DelayChannel has no such parameter” or similar message).
Connection parameters, similarly to submodule parameters, can also be assigned using pattern assignments, although the channel names to be matched by the patterns are a little more complicated and less convenient to use. A channel can be identified with the name of its source gate plus the channel name; the channel name is currently always channel. This is illustrated by the following example:
module Queueing {
    parameters:
        source.out.channel.delay = 10ms;
        queue.out.channel.delay = 20ms;
    submodules:
        source: Source;
        queue: Queue;
        sink: Sink;
    connections:
        source.out --> ned.DelayChannel --> queue.in;
        queue.out --> ned.DelayChannel --> sink.in;
}
Using bidirectional connections is a bit trickier, because both directions must be covered separately:
network Network {
    parameters:
        hostA.g$o[0].channel.datarate = 100Mbps;  // the A -> B connection
        hostB.g$o[0].channel.datarate = 100Mbps;  // the B -> A connection
        hostA.g$o[1].channel.datarate = 1Gbps;    // the A -> C connection
        hostC.g$o[0].channel.datarate = 1Gbps;    // the C -> A connection
    submodules:
        hostA: Host;
        hostB: Host;
        hostC: Host;
    connections:
        hostA.g++ <--> ned.DatarateChannel <--> hostB.g++;
        hostA.g++ <--> ned.DatarateChannel <--> hostC.g++;
}
Also, with the ++ syntax it is not always easy to figure out which gate indices map to the connections one needs to configure. If connection objects could be given names to override the default name “channel”, that would make it easier to identify connections in patterns. This feature is described in the next section.
The default name given to channel objects is "channel". Since OMNeT++ 4.3 it is possible to specify the name explicitly, and also to override the default name per channel type. The purpose of custom channel names is to make addressing easier when channel parameters are assigned from ini files.
The syntax for naming a channel in a connection is similar to submodule syntax: name: type. Since both name and type are optional, the colon must be there after name even if type is missing, in order to remove the ambiguity.
Examples:
r1.pppg++ <--> eth1: EthernetChannel <--> r2.pppg++;
a.out --> foo: {delay=1ms;} --> b.in;
a.out --> bar: --> b.in;
In the absence of an explicit name, the channel name comes from the @defaultname property of the channel type if that exists.
channel Eth10G extends ned.DatarateChannel like IEth { @defaultname(eth10G); }
There's a catch with @defaultname though: if the channel type is specified with a **.channelname.liketype= line in an ini file, then the channel type's @defaultname cannot be used as channelname in that configuration line, because the channel type would only be known as a result of using that very configuration line. To illustrate the problem, consider the above Eth10G channel, and a compound module containing the following connection:
r1.pppg++ <--> <> like IEth <--> r2.pppg++;
Then consider the following inifile:
**.eth10G.typename = "Eth10G" # Won't match! The eth10G name would come from # the Eth10G type - catch-22! **.channel.typename = "Eth10G" # OK, as lookup assumes the name "channel" **.eth10G.datarate = 10.01Gbps # OK, channel already exists with name "eth10G"
The anomaly can be avoided by using an explicit channel name in the connection, not using @defaultname, or by specifying the type via a module parameter (e.g. writing <param> like ... instead of <> like ...).
Simple programming constructs (loop, conditional) allow creating multiple connections easily.
This will be shown in the following examples.
One can create a chain of modules like this:
module Chain {
    parameters:
        int count;
    submodules:
        node[count] : Node {
            gates:
                port[2];
        }
    connections allowunconnected:
        for i = 0..count-2 {
            node[i].port[1] <--> node[i+1].port[0];
        }
}
One can build a binary tree in the following way:
simple BinaryTreeNode {
    gates:
        inout left;
        inout right;
        inout parent;
}

module BinaryTree {
    parameters:
        int height;
    submodules:
        node[2^height-1]: BinaryTreeNode;
    connections allowunconnected:
        for i=0..2^(height-1)-2 {
            node[i].left <--> node[2*i+1].parent;
            node[i].right <--> node[2*i+2].parent;
        }
}
Note that not every gate of the modules will be connected. By default, an unconnected gate produces a run-time error message when the simulation is started, but this error message is turned off here with the allowunconnected modifier. Consequently, it is the simple modules' responsibility not to send on an unconnected gate.
Conditional connections can be used to generate random topologies, for example. The following code generates a random subgraph of a full graph:
module RandomGraph {
    parameters:
        int count;
        double connectedness;  // 0.0<x<1.0
    submodules:
        node[count]: Node {
            gates:
                in[count];
                out[count];
        }
    connections allowunconnected:
        for i=0..count-1, for j=0..count-1 {
            node[i].out[j] --> node[j].in[i]
                if i!=j && uniform(0,1)<connectedness;
        }
}
Note the use of the allowunconnected modifier here too, to turn off error messages produced by the network setup code for unconnected gates.
Several approaches can be used for creating complex topologies that have a regular structure; three of them are described below.
This pattern takes a subset of the connections of a full graph. A condition is used to “carve out” the necessary interconnection from the full graph:
for i=0..N-1, for j=0..N-1 {
    node[i].out[...] --> node[j].in[...] if condition(i,j);
}
The RandomGraph compound module (presented earlier) is an example of this pattern, but the pattern can generate any graph where an appropriate condition(i,j) can be formulated. For example, when generating a tree structure, the condition would return whether node j is a child of node i or vice versa.
Though this pattern is very general, its use can become prohibitively expensive if the number of nodes N is large and the graph is sparse (it has far fewer than N² connections). The following two patterns do not suffer from this drawback.
The second pattern loops through all nodes and creates the necessary connections for each one. It can be generalized like this:
for i=0..Nnodes, for j=0..Nconns(i)-1 {
    node[i].out[j] --> node[rightNodeIndex(i,j)].in[j];
}
The Hypercube compound module (to be presented later) is a clear example of this approach. BinaryTree can also be regarded as an example of this pattern where the inner j loop is unrolled.
The applicability of this pattern depends on how easily the rightNodeIndex(i,j) function can be formulated.
A third pattern is to list all connections within a loop:
for i=0..Nconnections-1 {
    node[leftNodeIndex(i)].out[...] --> node[rightNodeIndex(i)].in[...];
}
This pattern can be used if the leftNodeIndex(i) and rightNodeIndex(i) mapping functions can be formulated easily enough.
The Chain module is an example of this approach where the mapping functions are extremely simple: leftNodeIndex(i)=i and rightNodeIndex(i) = i+1. The pattern can also be used to create a random subset of a full graph with a fixed number of connections.
In the case of irregular structures where none of the above patterns can be employed, one can resort to listing all connections, like one would do it in most existing simulators.
A submodule type may be specified with a module parameter of the type string, or in general, with any string-typed expression. The syntax uses the like keyword.
Let us begin with an example:
network Net6 {
    parameters:
        string nodeType;
    submodules:
        node[6]: <nodeType> like INode {
            address = index;
        }
    connections:
        ...
}
It creates a submodule vector whose module type will come from the nodeType parameter. For example, if nodeType is set to "SensorNode", then the module vector will consist of sensor nodes, provided such a module type exists and it qualifies. That is, INode must be an existing module interface, which the SensorNode module type must implement (more about this later).
As already mentioned, one can write an expression between the angle brackets. The expression may use the parameters of the parent module and of previously defined submodules, and has to yield a string value. For example, the following code is also valid:
network Net6 {
    parameters:
        string nodeTypePrefix;
        int variant;
    submodules:
        node[6]: <nodeTypePrefix + "Node" + string(variant)> like INode {
            ...
        }
}
The corresponding NED declarations:
moduleinterface INode {
    parameters:
        int address;
    gates:
        inout port[];
}

module SensorNode like INode {
    parameters:
        int address;
        ...
    gates:
        inout port[];
        ...
}
The “<nodeType> like INode” syntax has an issue when used with submodule vectors: it does not allow one to specify different types for different indices. The following syntax is better suited for submodule vectors:
The expression between the angle brackets may be left out altogether, leaving a pair of empty angle brackets, <>:
module Node {
    submodules:
        nic: <> like INic;  // type name expression left unspecified
    ...
}
Now the submodule type name is expected to be defined via typename pattern assignments. Typename pattern assignments look like pattern assignments for the submodule's parameters, only the parameter name is replaced by the typename keyword. Typename pattern assignments may also be written in the configuration file. In a network that uses the above Node NED type, typename pattern assignments would look like this:
network Network {
    parameters:
        node[*].nic.typename = "Ieee80211g";
    submodules:
        node[100]: Node;
}
A default value may also be specified between the angle brackets; it will be used if there is no typename assignment for the module:
module Node { submodules: nic: <default("Ieee80211b")> like INic; ... }
There must be exactly one module type that goes by the simple name Ieee80211b and also implements the module interface INic, otherwise an error message will be issued. (The imports in Node's the NED file play no role in the type resolution.) If there are two or more such types, one can remove the ambiguity by specifying the fully qualified module type name, i.e. one that also includes the package name:
module Node { submodules: nic: <default("acme.wireless.Ieee80211b")> like INic; // made-up name ... }
When creating reusable compound modules, it is often useful to be able to make a parametric submodule also optional. One solution is to let the user define the submodule type with a string parameter, and not create the module when the parameter is set to the empty string. Like this:
module Node {
    parameters:
        string tcpType = default("Tcp");
    submodules:
        tcp: <tcpType> like ITcp if tcpType!="";
}
However, this pattern, when used extensively, can lead to a large number of string parameters. Luckily, it is also possible to achieve the same effect with typename, without using extra parameters:
module Node { submodules: tcp: <default("Tcp")> like ITcp if typename!=""; }
The typename operator in a submodule's if condition evaluates to the would-be type of the submodule. By using the typename!="" condition, we can let the user eliminate the tcp submodule by setting its typename to the empty string. For example, in a network that uses the above NED type, typename pattern assignments could look like this:
network Network {
    parameters:
        node1.tcp.typename = "TcpExt";  // let node1 use a custom TCP
        node2.tcp.typename = "";        // no TCP in node2
    submodules:
        node1: Node;
        node2: Node;
}
Note that this trick does not work with submodule vectors. The reason is that the condition applies to the vector as a whole, while type is per-element.
It is often also useful to be able to check, e.g. in the connections section, whether a conditional submodule has been created or not. This can be done with the exists() operator. An example:
module Node {
    ...
    connections:
        ip.tcpOut --> tcp.ipIn if exists(ip) && exists(tcp);
}
Limitation: exists() may only be used after the submodule's occurrence in the compound module.
Parametric connection types work similarly to parametric submodule types, and the syntax is similar as well. A basic example that uses a parameter of the parent module:
a.g++ <--> <channelType> like IMyChannel <--> b.g++;
a.g++ <--> <channelType> like IMyChannel {@display("ls=red");} <--> b.g++;
The expression may use loop variables, parameters of the parent module and also parameters of submodules (e.g. host[2].channelType).
The type expression may also be absent, and then the type is expected to be specified using typename pattern assignments:
a.g++ <--> <> like IMyChannel <--> b.g++;
a.g++ <--> <> like IMyChannel {@display("ls=red");} <--> b.g++;
A default value may also be given:
a.g++ <--> <default("Ethernet100")> like IMyChannel <--> b.g++; a.g++ <--> <default(channelType)> like IMyChannel <--> b.g++;
The corresponding type pattern assignments:
a.g$o[0].channel.typename = "Ethernet1000";  // A -> B channel
b.g$o[0].channel.typename = "Ethernet1000";  // B -> A channel
NED properties are metadata annotations that can be added to modules, parameters, gates, connections, NED files, packages, and virtually anything in NED. @display, @class, @namespace, @mutable, @unit, @prompt, @loose, @directIn are all properties that have been mentioned in previous sections, but those examples only scratch the surface of what properties are used for.
Using properties, one can attach extra information to NED elements. Some properties are interpreted by NED or by the simulation kernel; other properties may be read and used from within the simulation model, or provide hints for NED editing tools.
Properties are attached to the type, so one cannot have different properties defined per-instance. All instances of modules, connections, parameters, etc. created from any particular location in the NED files have identical properties.
The following example shows the syntax for annotating various NED elements:
@namespace(foo);  // file property

module Example {
    parameters:
        @node;                        // module property
        @display("i=device/pc");      // module property
        int a @unit(s) = default(1);  // parameter property
    gates:
        output out @loose @labels(pk);  // gate properties
    submodules:
        src: Source {
            parameters:
                @display("p=150,100");          // submodule property
                count @prompt("Enter count:");  // adding a property to a parameter
            gates:
                out[] @loose;                   // adding a property to a gate
        }
        ...
    connections:
        src.out++ --> { @display("ls=green,2"); } --> sink1.in;  // connection prop.
        src.out++ --> Channel { @display("ls=green,2"); } --> sink2.in;
}
Sometimes it is useful to have multiple properties with the same name, for example for declaring multiple statistics produced by a simple module. Property indices make this possible.
A property index is an identifier or a number in square brackets after the property name, such as eed and jitter in the following example:
simple App {
    @statistic[eed](title="end-to-end delay of received packets";unit=s);
    @statistic[jitter](title="jitter of received packets");
}
This example declares two statistics as @statistic properties, @statistic[eed] and @statistic[jitter]. Property values within the parentheses are used to supply additional information, such as a more descriptive name (title="...") or a unit (unit=s). Property indices can be conveniently accessed from the C++ API as well; for example, it is possible to ask which indices exist for the "statistic" property, and the result will be a list containing "eed" and "jitter".
In the @statistic example the index was textual and meaningful, but neither is actually required. The following dummy example shows the use of numeric indices which may be ignored altogether by the code that interprets the properties:
simple Dummy {
    @foo[1](what="apples";amount=2);
    @foo[2](what="oranges";amount=5);
}
Note that without the index, the lines would actually define the same @foo property, and would overwrite each other's values.
Indices also make it possible to override entries via inheritance:
simple DummyExt extends Dummy {
    @foo[2](what="grapefruits");  // 5 grapefruits instead of 5 oranges
}
Properties may contain data, given in parentheses; the data model is quite flexible. To begin with, properties may contain no value or a single value:
@node;
@node();  // same as @node
@class(FtpApp2);
Properties may contain lists:
@foo(Sneezy,Sleepy,Dopey,Doc,Happy,Bashful,Grumpy);
They may contain key-value pairs, separated by semicolons:
@foo(x=10.31; y=30.2; unit=km);
In key-value pairs, each value can be a (comma-separated) list:
@foo(coords=47.549,19.034;labels=vehicle,router,critical);
The above examples are special cases of the general data model. According to the data model, properties contain key-valuelist pairs, separated by semicolons. Items in valuelist are separated by commas. Wherever key is missing, values go on the valuelist of the default key, the empty string.
Value items may contain words, numbers, string constants and some other characters, but not arbitrary strings. Whenever the syntax does not permit some value, it should be enclosed in quotes. This quoting does not affect the value because the parser automatically drops one layer of quotes; thus, @class(TCP) and @class("TCP") are exactly the same. If the quotes themselves need to be part of the value, an extra layer of quotes and escaping are the solution: @foo("\"some string\"").
There are also some conventions. One can use properties to tag NED elements; for example, a @host property could be used to mark all module types that represent various hosts. This property could be recognized e.g. by editing tools, by topology discovery code inside the simulation model, etc.
The convention for such a “marker” property is that any extra data in it (i.e. within parens) is ignored, except a single word false, which has the special meaning of “turning off” the property. Thus, any simulation model or tool that interprets properties should handle all the following forms as equivalent to @host: @host(), @host(true), @host(anything-but-false), @host(a=1;b=2); and @host(false) should be interpreted as the lack of the @host tag.
Properties defined on a module or channel type may be updated both by subclassing and when using type as a submodule or connection channel. One can add new properties, and also modify existing ones.
When modifying a property, the new property is merged with the old one. The rules of merging are fairly simple. New keys simply get added. If a key already exists in the old property, items in its valuelist overwrite items at the same position in the old property. A single hyphen (-) as a valuelist item serves as an “antivalue”: it removes the item at the corresponding position.
Some examples:
base           | new                     | result
@prop          | @prop(a)                | @prop(a)
@prop(a,b,c)   | @prop(,-)               | @prop(a,,c)
@prop(foo=a,b) | @prop(foo=A,,c;bar=1,2) | @prop(foo=A,b,c;bar=1,2)
Inheritance support in the NED language is only described briefly here, because several details and examples have been already presented in previous sections.
In NED, a type may only extend (extends keyword) an element of the same component type: a simple module may extend a simple module, a channel may extend a channel, a module interface may extend a module interface, and so on. There is one irregularity, however: a compound module may extend a simple module (inheriting its C++ class), but not vice versa.
Single inheritance is supported for modules and channels, and multiple inheritance is supported for module interfaces and channel interfaces. A network is a shorthand for a compound module with the @isNetwork property set, so the same rules apply to it as to compound modules.
However, a simple or compound module type may implement (like keyword) several module interfaces; likewise, a channel type may implement several channel interfaces.
Inheritance may add new elements (parameters, gates, submodules, connections, and so on), and may also refine inherited ones, for example by assigning values to inherited parameters or setting inherited gate vector sizes.
For details and examples, see the corresponding sections of this chapter (simple modules [3.3], compound modules [3.4], channels [3.5], parameters [3.6], gates [3.7], submodules [3.8], connections [3.9], module interfaces and channel interfaces [3.11.1]).
Having all NED files in a single directory is fine for small simulation projects. When a project grows, however, it sooner or later becomes necessary to introduce a directory structure, and sort the NED files into them. NED natively supports directory trees with NED files, and calls directories packages. Packages are also useful for reducing name conflicts, because names can be qualified with the package name.
When a simulation is run, one must tell the simulation kernel the directory which is the root of the package tree; let's call it NED source folder. The simulation kernel will traverse the whole directory tree, and load all NED files from every directory. One can have several NED directory trees, and their roots (the NED source folders) should be given to the simulation kernel in the NED path variable. The NED path can be specified in several ways: as an environment variable (NEDPATH), as a configuration option (ned-path), or as a command-line option to the simulation runtime (-n). NEDPATH is described in detail in chapter [11].
Directories in a NED source tree correspond to packages. If NED files are in the <root>/a/b/c directory (where <root> is listed in NED path), then the package name is a.b.c. The package name has to be explicitly declared at the top of the NED files as well, like this:
package a.b.c;
The package name that follows from the directory name and the declared package must match; it is an error if they don't. (The only exception is the root package.ned file, as described below.)
By convention, package names are all lowercase, and begin with either the project name (myproject), or the reversed domain name plus the project name (org.example.myproject). The latter convention would cause the directory tree to begin with a few levels of empty directories, but this can be eliminated with a toplevel package.ned.
NED files called package.ned have a special role, as they are meant to represent the whole package. For example, comments in package.ned are treated as documentation of the package. Also, a @namespace property in a package.ned file affects all NED files in that directory and all directories below.
The toplevel package.ned file can be used to designate the root package, which is useful for eliminating the few levels of empty directories resulting from the package naming convention. For example, given a project where all NED types are under the org.example.myproject package, one can eliminate the empty directory levels org, example and myproject by creating a package.ned file in the source root directory with the package declaration org.example.myproject. This will cause a directory foo under the root to be interpreted as package org.example.myproject.foo, and NED files in it must contain that as their package declaration. Only the root package.ned can define the package; package.ned files in subdirectories must follow it.
Let's look at the INET Framework as an example; it contains hundreds of NED files in several dozen packages. The directory structure looks like this:
INET/
    src/
        base/
        transport/
            tcp/
            udp/
            ...
        networklayer/
        linklayer/
        ...
    examples/
        adhoc/
        ethernet/
        ...
The src and examples subdirectories are denoted as NED source folders, so NEDPATH is the following (provided INET was unpacked in /home/joe):
/home/joe/INET/src;/home/joe/INET/examples
Both src and examples contain package.ned files to define the root package:
// INET/src/package.ned:
package inet;

// INET/examples/package.ned:
package inet.examples;
And other NED files follow the package defined in package.ned:
// INET/src/transport/tcp/TCP.ned:
package inet.transport.tcp;
We already mentioned that packages can be used to distinguish similarly named NED types. The name that includes the package name (a.b.c.Queue for a Queue module in the a.b.c package) is called fully qualified name; without the package name (Queue) it is called simple name.
Simple names alone are not enough to unambiguously identify a type. A type can be referred to either by its fully qualified name, or by its simple name, provided the simple name can be resolved unambiguously (for example, because the type has been imported or is in the same package).
Types can be imported with the import keyword by either fully qualified name, or by a wildcard pattern. In wildcard patterns, one asterisk ("*") stands for "any character sequence not containing period", and two asterisks ("**") mean "any character sequence which may contain period".
So, any of the following lines can be used to import a type called inet.protocols.networklayer.ip.RoutingTable:
import inet.protocols.networklayer.ip.RoutingTable;
import inet.protocols.networklayer.ip.*;
import inet.protocols.networklayer.ip.Ro*Ta*;
import inet.protocols.*.ip.*;
import inet.**.RoutingTable;
If an import explicitly names a type with its exact fully qualified name, then that type must exist, otherwise it is an error. Imports containing wildcards are more permissive, it is allowed for them not to match any existing NED type (although that might generate a warning.)
Inner types may not be referred to outside their enclosing types, so they cannot be imported either.
The situation is a little different for submodule and connection channel specifications using the like keyword, when the type name comes from a string-valued expression (see section [3.11.1] about submodule and channel types as parameters). Imports are not much use here: at the time of writing the NED file it is not yet known what NED types will be suitable for being "plugged in" there, so they cannot be imported in advance.
There is no problem with fully qualified names, but simple names need to be resolved differently. What NED does is this: it determines which interface the module or channel type must implement (i.e. ... like INode), and then collects the types that have the given simple name AND implement the given interface. There must be exactly one such type, which is then used. If there is none or there are more than one, it will be reported as an error.
Let us see the following example:
module MobileHost {
    parameters:
        string mobilityType;
    submodules:
        mobility: <mobilityType> like IMobility;
    ...
}
and suppose that the following modules implement the IMobility module interface: inet.mobility.RandomWalk, inet.adhoc.RandomWalk, inet.mobility.MassMobility. Also suppose that there is a type called inet.examples.adhoc.MassMobility but it does not implement the interface.
So if mobilityType="MassMobility", then inet.mobility.MassMobility will be selected; the other MassMobility doesn't interfere. However, if mobilityType="RandomWalk", then it is an error because there are two matching RandomWalk types. Both RandomWalk's can still be used, but one must explicitly choose one of them by providing a package name: mobilityType="inet.adhoc.RandomWalk".
It is not mandatory to make use of packages: if all NED files are in a single directory listed on the NEDPATH, then package declarations (and imports) can be omitted. Those files are said to be in the default package.
Simple modules are the active components in the model. Simple modules are programmed in C++, using the OMNeT++ class library. The following sections contain a short introduction to discrete event simulation in general, explain how its concepts are implemented in OMNeT++, and give an overview and practical advice on how to design and code simple modules.
This section contains a very brief introduction to how discrete event simulation (DES) works, in order to introduce the terms we will use when explaining OMNeT++ concepts and their implementation.
A discrete event system is a system where state changes (events) happen at discrete instances in time, and events take zero time to happen. It is assumed that nothing (i.e. nothing interesting) happens between two consecutive events, that is, no state change takes place in the system between the events. This is in contrast to continuous systems where state changes are continuous. Systems that can be viewed as discrete event systems can be modeled using discrete event simulation, DES.
For example, computer networks are usually viewed as discrete event systems. Some of the events are: the start of a packet transmission, the end of a packet transmission, and the expiry of a retransmission timeout.
This implies that between two events such as start of a packet transmission and end of a packet transmission, nothing interesting happens. That is, the packet's state remains being transmitted. Note that the definition of “interesting” events and states always depends on the intent and purposes of the modeler. If we were interested in the transmission of individual bits, we would have included something like start of bit transmission and end of bit transmission among our events.
The time when events occur is often called event timestamp; with OMNeT++ we use the term arrival time (because in the class library, the word “timestamp” is reserved for a user-settable attribute in the event class). Time within the model is often called simulation time, model time or virtual time as opposed to real time or CPU time which refer to how long the simulation program has been running and how much CPU time it has consumed.
Discrete event simulation maintains the set of future events in a data structure often called FES (Future Event Set) or FEL (Future Event List). Such simulators usually work according to the following pseudocode:
initialize -- this includes building the model and inserting initial events to FES

while (FES not empty and simulation not yet complete)
{
    retrieve first event from FES
    t := timestamp of this event
    process event
      (processing may insert new events in FES or delete existing ones)
}

finish simulation (write statistical results, etc.)
The initialization step usually builds the data structures representing the simulation model, calls any user-defined initialization code, and inserts initial events into the FES to ensure that the simulation can start. Initialization strategies can differ considerably from one simulator to another.
The subsequent loop consumes events from the FES and processes them. Events are processed in strict timestamp order to maintain causality, that is, to ensure that no current event may have an effect on earlier events.
Processing an event involves calls to user-supplied code. For example, using the computer network simulation example, processing a “timeout expired” event may consist of re-sending a copy of the network packet, updating the retry count, scheduling another “timeout” event, and so on. The user code may also remove events from the FES, for example when canceling timeouts.
The simulation stops when there are no events left (this rarely happens in practice), or when it isn't necessary for the simulation to run further because the model time or the CPU time has reached a given limit, or because the statistics have reached the desired accuracy. At this time, before the program exits, the user will typically want to record statistics into output files.
OMNeT++ uses messages to represent events.
Events are consumed from the FES in arrival time order, to maintain causality. More precisely, given two messages, the one with the earlier arrival time is processed first; if the arrival times are equal, the one with the smaller scheduling priority value is processed first; and if the priorities are also the same, the one that was scheduled or sent earlier is processed first.
Scheduling priority is a user-assigned integer attribute of messages.
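A small, hypothetical C++ sketch of how a module might use scheduling priority to make one of two events scheduled for the same simulation time execute first (the module and message names are made up for illustration):

#include <omnetpp.h>
using namespace omnetpp;

class PriorityDemo : public cSimpleModule   // hypothetical module
{
  protected:
    virtual void initialize() override {
        cMessage *regular = new cMessage("regular");   // default priority (0)
        cMessage *urgent = new cMessage("urgent");
        urgent->setSchedulingPriority(-1);   // smaller value = executed earlier
        // both events have the same arrival time; "urgent" is processed first
        scheduleAt(simTime() + 1.0, regular);
        scheduleAt(simTime() + 1.0, urgent);
    }

    virtual void handleMessage(cMessage *msg) override {
        EV << "processing " << msg->getName() << "\n";
        delete msg;
    }
};

Define_Module(PriorityDemo);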
The current simulation time can be obtained with the simTime() function.
Simulation time in OMNeT++ is represented by the C++ type simtime_t, which is by default a typedef to the SimTime class. SimTime stores simulation time in a 64-bit integer, using a decimal fixed-point representation. The resolution is controlled by the scale exponent global configuration variable; that is, all SimTime instances have the same resolution. The exponent can be chosen between -18 (attosecond resolution) and 0 (second resolution). Some exponents, with the ranges they provide, are shown in the following table.
Exponent | Resolution | Approx. Range
-18 | 10^-18 s (1as) | +/- 9.22s
-15 | 10^-15 s (1fs) | +/- 153.72 minutes
-12 | 10^-12 s (1ps) | +/- 106.75 days
-9 | 10^-9 s (1ns) | +/- 292.27 years
-6 | 10^-6 s (1us) | +/- 292271 years
-3 | 10^-3 s (1ms) | +/- 2.9227e8 years
0 | 1 s | +/- 2.9227e11 years
Note that although simulation time cannot be negative, it is still useful to be able to represent negative numbers, because they often arise during the evaluation of arithmetic expressions.
There is no implicit conversion from SimTime to double, mostly because it would conflict with overloaded arithmetic operations of SimTime; use the dbl() method of SimTime or the SIMTIME_DBL() macro to convert. To reduce the need for dbl(), several functions and methods have overloaded variants that directly accept SimTime, for example fabs(), fmod(), div(), ceil(), floor(), uniform(), exponential(), and normal().
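The following minimal sketch illustrates the conversions described above, using only the methods and overloads mentioned in this section:

simtime_t t = simTime();
double secs = t.dbl();                           // explicit conversion to double
double secs2 = SIMTIME_DBL(t);                   // equivalent, using the macro
simtime_t dt = fabs(t - SimTime(1, SIMTIME_S));  // fabs() has a SimTime-aware overload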
Other useful methods of SimTime include str(), which returns the value as a string; parse(), which converts a string to SimTime; raw(), which returns the underlying 64-bit integer; getScaleExp(), which returns the global scale exponent; isZero(), which tests whether the simulation time is 0; and getMaxTime(), which returns the maximum simulation time that can be represented at the current scale exponent. Zero and the maximum simulation time are also accessible via the SIMTIME_ZERO and SIMTIME_MAX macros.
// 340 microseconds in the future, truncated to millisecond boundary
simtime_t timeout = (simTime() + SimTime(340, SIMTIME_US)).trunc(SIMTIME_MS);
The implementation of the FES is a crucial factor in the performance of a discrete event simulator. In OMNeT++, the FES implementation is replaceable, and the default implementation uses a binary heap as its data structure. The binary heap is generally considered to be the best FES algorithm for discrete event simulation, as it provides good, balanced performance for most workloads. (Exotic data structures like the skip list may perform better than a heap in some cases.)
OMNeT++ simulation models are composed of modules and connections. Modules may be simple (atomic) modules or compound modules; simple modules are the active components in a model, and their behaviour is defined by the user as C++ code. Connections may have associated channel objects. Channel objects encapsulate channel behavior: propagation and transmission time modeling, error modeling, and possibly others. Channels are also programmable in C++ by the user.
Modules and channels are represented with the cModule and cChannel classes, respectively. cModule and cChannel are both derived from the cComponent class.
The user defines simple module types by subclassing cSimpleModule. Compound modules are instantiated with cModule, although the user can override it with @class in the NED file, and can even use a simple module C++ class (i.e. one derived from cSimpleModule) for a compound module.
The cChannel's subclasses include the three built-in channel types: cIdealChannel, cDelayChannel and cDatarateChannel. The user can create new channel types by subclassing cChannel or any other channel class.
The following inheritance diagram illustrates the relationship of the classes mentioned above.
Simple modules and channels can be programmed by redefining certain member functions, and providing your own code in them. Some of those member functions are declared on cComponent, the common base class of channels and modules.
cComponent has the following member functions meant for redefining in subclasses:
initialize() and finish(), together with initialize()'s variants for multi-stage initialization, will be covered in detail in section [4.3.3].
In OMNeT++, events occur inside simple modules. Simple modules encapsulate C++ code that generates events and reacts to events, implementing the behaviour of the module.
To define the dynamic behavior of a simple module, one of the following member functions needs to be overridden:
Modules written with activity() and handleMessage() can be freely mixed within a simulation model. Generally, handleMessage() should be preferred to activity(), due to scalability and other practical reasons. The two functions will be described in detail in sections [4.4.1] and [4.4.2], including their advantages and disadvantages.
The behavior of channels can also be modified by redefining member functions. However, the channel API is slightly more complicated than that of simple modules, so we'll describe it in a later section ([4.8]).
Last, let us mention refreshDisplay(), which is related to updating the visual appearance of the simulation when run under a graphical user interface. refreshDisplay() is covered in the chapter that deals with simulation visualization ([8.2]).
As mentioned before, a simple module is nothing more than a C++ class which has to be subclassed from cSimpleModule, with one or more virtual member functions redefined to define its behavior.
The class has to be registered with OMNeT++ via the Define_Module() macro. The Define_Module() line should always be put into a .cc or .cpp file, not into a header file (.h), because the compiler generates code from it.
The following HelloModule is about the simplest simple module one could write. (We could have left out the initialize() method as well to make it even smaller, but how would it say Hello then?) Note cSimpleModule as base class, and the Define_Module() line.
// file: HelloModule.cc
#include <omnetpp.h>

using namespace omnetpp;

class HelloModule : public cSimpleModule
{
  protected:
    virtual void initialize();
    virtual void handleMessage(cMessage *msg);
};

// register module class with OMNeT++
Define_Module(HelloModule);

void HelloModule::initialize()
{
    EV << "Hello World!\n";
}

void HelloModule::handleMessage(cMessage *msg)
{
    delete msg;  // just discard everything we receive
}
In order to be able to refer to this simple module type in NED files, we also need an associated NED declaration which might look like this:
// file: HelloModule.ned
simple HelloModule
{
    gates:
        input in;
}
Simple modules are never instantiated by the user directly, but rather by the simulation kernel. This implies that one cannot write arbitrary constructors: the signature must be what is expected by the simulation kernel. Luckily, this contract is very simple: the constructor must be public, and must take no arguments:
public: HelloModule(); // constructor takes no arguments
cSimpleModule itself has two constructors:
The first version should be used with handleMessage() simple modules, and the second one with activity() modules. (With the latter, the activity() method of the module class runs as a coroutine which needs a separate CPU stack, usually of 16..32K. This will be discussed in detail later.) Passing zero stack size to the latter constructor also selects handleMessage().
Thus, the following constructor definitions are all OK, and select handleMessage() to be used with the module:
HelloModule::HelloModule() {...}
HelloModule::HelloModule() : cSimpleModule() {...}
It is also OK to omit the constructor altogether, because the compiler-generated one is suitable too.
The following constructor definition selects activity() to be used with the module, with 16K of coroutine stack:
HelloModule::HelloModule() : cSimpleModule(16384) {...}
The initialize() and finish() methods are declared as part of cComponent, and provide the user the opportunity of running code at the beginning and at successful termination of the simulation.
The reason initialize() exists is that usually you cannot put simulation-related code into the simple module constructor, because the simulation model is still being set up when the constructor runs, and many required objects are not yet available. In contrast, initialize() is called just before the simulation starts executing, when everything else has already been set up.
finish() is for recording statistics, and it only gets called when the simulation has terminated normally. It does not get called when the simulation stops with an error message. The destructor always gets called at the end, no matter how the simulation stopped, but at that time it is fair to assume that the simulation model has already been halfway demolished.
Based on the above considerations, the following usage conventions exist for these four methods:
Constructor: Set pointer members of the module class to nullptr; postpone all other initialization tasks to initialize().
initialize(): Perform all initialization tasks: read module parameters, initialize class variables, allocate dynamic data structures with new; also allocate and initialize self-messages (timers) if needed.
finish(): Record statistics. Do not delete anything or cancel timers -- all cleanup must be done in the destructor.
Destructor: Delete everything which was allocated by new and is still held by the module class. With self-messages (timers), use the cancelAndDelete(msg) function (see the sketch after this list)! It is almost always wrong to simply delete a self-message from the destructor, because it might be in the scheduled events list. The cancelAndDelete(msg) function checks for that first, and cancels the message before deletion if necessary.
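As an illustration, a destructor following the above conventions might look like the sketch below; MyModule, timeoutEvent and buffer are hypothetical names, standing for a module class, a self-message data member, and a dynamically allocated data member.

MyModule::~MyModule()
{
    // safe even if the message is currently scheduled, or if the pointer is nullptr
    cancelAndDelete(timeoutEvent);
    // release any other dynamically allocated data still owned by the module
    delete buffer;
}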
OMNeT++ prints the list of unreleased objects at the end of the simulation. When a simulation model dumps "undisposed object ..." messages, this indicates that the corresponding module destructors should be fixed. As a temporary measure, these messages may be hidden by setting print-undisposed=false in the configuration.
The initialize() functions of the modules are invoked before the first event is processed, but after the initial events (starter messages) have been placed into the FES by the simulation kernel.
Both simple and compound modules have initialize() functions. A compound module's initialize() function runs before that of its submodules.
The finish() functions are called when the event loop has terminated, and only if it terminated normally.
The calling order for finish() is the reverse of the order of initialize(): first the submodules, then the encompassing compound module.
This is summarized in the following pseudocode:
perform simulation run:
    build network (i.e. the system module and its submodules recursively)
    insert starter messages for all submodules using activity()
    do callInitialize() on system module
    enter event loop  // (described earlier)
    if (event loop terminated normally)  // i.e. no errors
        do callFinish() on system module
    clean up

callInitialize()
{
    call to user-defined initialize() function
    if (module is compound)
        for (each submodule)
            do callInitialize() on submodule
}

callFinish()
{
    if (module is compound)
        for (each submodule)
            do callFinish() on submodule
    call to user-defined finish() function
}
Keep in mind that finish() is not always called, so it isn't a good place for cleanup code which should run every time the module is deleted. finish() is only a good place for writing statistics, result post-processing and other operations which are supposed to run only on successful completion. Cleanup code should go into the destructor.
In simulation models where one-stage initialization provided by initialize() is not sufficient, one can use multi-stage initialization. Modules have two functions which can be redefined by the user:
virtual void initialize(int stage);
virtual int numInitStages() const;
At the beginning of the simulation, initialize(0) is called for all modules, then initialize(1), initialize(2), etc. You can think of it as if initialization took place in several “waves”. For each module, numInitStages() must be redefined to return the number of init stages required, e.g. for a two-stage init, numInitStages() should return 2, and initialize(int stage) must be implemented to handle the stage=0 and stage=1 cases.
The callInitialize() function performs the full multi-stage initialization for that module and all its submodules.
If you do not redefine the multi-stage initialization functions, the default behavior is single-stage initialization: the default numInitStages() returns 1, and the default initialize(int stage) simply calls initialize().
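A minimal sketch of two-stage initialization might look like the following; MyNode is a hypothetical module type used only for illustration.

#include <omnetpp.h>
using namespace omnetpp;

class MyNode : public cSimpleModule
{
  protected:
    virtual int numInitStages() const override { return 2; }
    virtual void initialize(int stage) override;
};

Define_Module(MyNode);

void MyNode::initialize(int stage)
{
    if (stage == 0) {
        // stage 0: local setup that does not depend on other modules
    }
    else if (stage == 1) {
        // stage 1: may rely on all modules having completed stage 0
    }
}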
The task of finish() is implemented in several other simulators by introducing a special end-of-simulation event. This is not a good practice, because the simulation programmer has to code the models (often represented as FSMs) so that they can always properly respond to end-of-simulation events, in whatever state they are. This often makes the program code unnecessarily complicated. For this reason, OMNeT++ does not use an end-of-simulation event.
This can also be witnessed in the design of the PARSEC simulation language (UCLA). Its predecessor Maisie used end-of-simulation events, but -- as documented in the PARSEC manual -- this has led to awkward programming in many cases, so for PARSEC end-of-simulation events were dropped in favour of finish() (called finalize() in PARSEC).
This section discusses cSimpleModule's previously mentioned handleMessage() and activity() member functions, intended to be redefined by the user.
The idea is that at each event (message arrival) we simply call a user-defined function. This function, handleMessage(cMessage *msg), is a virtual member function of cSimpleModule which does nothing by default -- the user has to redefine it in subclasses and add the message processing code.
The handleMessage() function will be called for every message that arrives at the module. The function should process the message and return immediately after that. The simulation time is potentially different in each call. No simulation time elapses within a call to handleMessage().
The event loop inside the simulator handles both activity() and handleMessage() simple modules, and it corresponds to the following pseudocode:
while (FES not empty and simulation not yet complete)
{
    retrieve first event from FES
    t := timestamp of this event
    m := module containing this event
    if (m works with handleMessage())
        m->handleMessage(event)
    else  // m works with activity()
        transferTo(m)
}
Modules with handleMessage() are NOT started automatically: the simulation kernel creates starter messages only for modules with activity(). This means that you have to schedule self-messages from the initialize() function if you want a handleMessage() simple module to start working “by itself”, without first receiving a message from other modules.
To use the handleMessage() mechanism in a simple module, you must specify zero stack size for the module. This is important, because this tells OMNeT++ that you want to use handleMessage() and not activity().
Message/event related functions you can use in handleMessage():
The receive() and wait() functions cannot be used in handleMessage(), because they are coroutine-based by nature, as explained in the section about activity().
You have to add data members to the module class for every piece of information you want to preserve. This information cannot be stored in local variables of handleMessage() because they are destroyed when the function returns. Also, they cannot be stored in static variables in the function (or the class), because they would be shared between all instances of the class.
Data members to be added to the module class will typically include things like:
These variables are often initialized from the initialize() method, because the information needed to obtain the initial value (e.g. module parameters) may not yet be available at the time the module constructor runs.
Another task to be done in initialize() is to schedule initial event(s) which trigger the first call(s) to handleMessage(). After the first call, handleMessage() must take care to schedule further events for itself so that the “chain” is not broken. Scheduling events is not necessary if your module only has to react to messages coming from other modules.
finish() is normally used to record statistics information accumulated in data members of the class at the end of the simulation.
handleMessage() is in most cases a better choice than activity():
Models of protocol layers in a communication network tend to have a common structure on a high level because fundamentally they all have to react to three types of events: to messages arriving from higher layer protocols (or apps), to messages arriving from lower layer protocols (from the network), and to various timers and timeouts (that is, self-messages).
This usually results in the following source code pattern:
class FooProtocol : public cSimpleModule
{
  protected:
    // state variables
    // ...

    virtual void processMsgFromHigherLayer(cMessage *packet);
    virtual void processMsgFromLowerLayer(FooPacket *packet);
    virtual void processTimer(cMessage *timer);

    virtual void initialize();
    virtual void handleMessage(cMessage *msg);
};

// ...

void FooProtocol::handleMessage(cMessage *msg)
{
    if (msg->isSelfMessage())
        processTimer(msg);
    else if (msg->arrivedOn("fromNetw"))
        processMsgFromLowerLayer(check_and_cast<FooPacket *>(msg));
    else
        processMsgFromHigherLayer(msg);
}
The functions processMsgFromHigherLayer(), processMsgFromLowerLayer() and processTimer() are then usually split further: there are separate methods to process separate packet types and separate timers.
The code for simple packet generators and sinks programmed with handleMessage() might be as simple as the following pseudocode:
PacketGenerator::handleMessage(msg)
{
    create and send out a new packet;
    schedule msg again to trigger next call to handleMessage;
}

PacketSink::handleMessage(msg)
{
    delete msg;
}
Note that PacketGenerator will need to redefine initialize() to create msg and schedule the first event.
The following simple module generates packets with exponential inter-arrival time. (Some details in the source haven't been discussed yet, but the code is probably understandable nevertheless.)
class Generator : public cSimpleModule
{
  public:
    Generator() : cSimpleModule() {}

  protected:
    virtual void initialize();
    virtual void handleMessage(cMessage *msg);
};

Define_Module(Generator);

void Generator::initialize()
{
    // schedule first sending
    scheduleAt(simTime(), new cMessage);
}

void Generator::handleMessage(cMessage *msg)
{
    // generate & send packet
    cMessage *pkt = new cMessage;
    send(pkt, "out");

    // schedule next call
    scheduleAt(simTime()+exponential(1.0), msg);
}
A bit more realistic example is to rewrite our Generator to create packet bursts, each consisting of burstLength packets.
We add some data members to the class:
The code:
class BurstyGenerator : public cSimpleModule
{
  protected:
    int burstLength;
    int burstCounter;

    virtual void initialize();
    virtual void handleMessage(cMessage *msg);
};

Define_Module(BurstyGenerator);

void BurstyGenerator::initialize()
{
    // init parameters and state variables
    burstLength = par("burstLength");
    burstCounter = burstLength;

    // schedule first packet of first burst
    scheduleAt(simTime(), new cMessage);
}

void BurstyGenerator::handleMessage(cMessage *msg)
{
    // generate & send packet
    cMessage *pkt = new cMessage;
    send(pkt, "out");

    // if this was the last packet of the burst
    if (--burstCounter == 0) {
        // schedule next burst
        burstCounter = burstLength;
        scheduleAt(simTime()+exponential(5.0), msg);
    }
    else {
        // schedule next sending within burst
        scheduleAt(simTime()+exponential(1.0), msg);
    }
}
Pros:
Cons:
Usually, handleMessage() should be preferred over activity().
Many simulation packages use a similar approach, often topped with something like a state machine (FSM) which hides the underlying function calls. Such systems are:
OMNeT++'s FSM support is described in the next section.
With activity(), a simple module can be coded much like an operating system process or thread. One can wait for an incoming message (event) at any point of the code, suspend the execution for some time (model time!), etc. When the activity() function exits, the module is terminated. (The simulation can continue if there are other modules which can run.)
The most important functions that can be used in activity() are (they will be discussed in detail later):
The activity() function normally contains an infinite loop, with at least a wait() or receive() call in its body.
Generally you should prefer handleMessage() to activity(). The main problem with activity() is that it doesn't scale because every module needs a separate coroutine stack. It has also been observed that activity() does not encourage a good programming style, and stack switching also confuses many debuggers.
There is one scenario where activity()'s process-style description is convenient: when the process has many states but transitions are very limited, i.e. from any state the process can only go to one or two other states. For example, this is the case when programming a network application, which uses a single network connection. The pseudocode of the application which talks to a transport layer protocol might look like this:
activity()
{
    while (true) {
        open connection by sending OPEN command to transport layer
        receive reply from transport layer
        if (open not successful) {
            wait(some time)
            continue  // loop back to while()
        }
        while (there is more to do) {
            send data on network connection
            if (connection broken) {
                continue outer loop  // loop back to outer while()
            }
            wait(some time)
            receive data on network connection
            if (connection broken) {
                continue outer loop  // loop back to outer while()
            }
            wait(some time)
        }
        close connection by sending CLOSE command to transport layer
        if (close not successful) {
            // handle error
        }
        wait(some time)
    }
}
If there is a need to handle several connections concurrently, dynamically creating simple modules to handle each is an option. Dynamic module creation will be discussed later.
There are situations when you certainly do not want to use activity(). If the activity() function contains no wait() and it has only one receive() at the top of a message handling loop, there is no point in using activity(), and the code should be written with handleMessage(). The body of the loop would then become the body of handleMessage(), state variables inside activity() would become data members in the module class, and they would be initialized in initialize().
Example:
void Sink::activity()
{
    while (true) {
        cMessage *msg = receive();
        delete msg;
    }
}
should rather be programmed as:
void Sink::handleMessage(cMessage *msg) { delete msg; }
activity() is run in a coroutine. Coroutines are similar to threads, but are scheduled non-preemptively (this is also called cooperative multitasking). One can switch from one coroutine to another with a transferTo(otherCoroutine) call, causing the first coroutine to be suspended and the second one to run. Later, when the second coroutine performs a transferTo(firstCoroutine) call, execution of the first coroutine resumes from the point of its transferTo(otherCoroutine) call. The full state of the coroutine, including local variables, is preserved while the thread of execution is in other coroutines. This implies that each coroutine has its own CPU stack, and transferTo() involves a switch from one CPU stack to another.
Coroutines are at the heart of OMNeT++, but the simulation programmer never needs to call transferTo() or other functions of the coroutine library, or care about how the coroutine library is implemented. It is important to understand, however, how the event loop found in discrete event simulators works with coroutines.
When using coroutines, the event loop looks like this (simplified):
while (FES not empty and simulation not yet complete)
{
    retrieve first event from FES
    t := timestamp of this event
    transferTo(module containing the event)
}
That is, when a module has an event, the simulation kernel transfers the control to the module's coroutine. It is expected that when the module “decides it has finished the processing of the event”, it will transfer the control back to the simulation kernel by a transferTo(main) call. Initially, simple modules using activity() are “booted” by events (“starter messages”) inserted into the FES by the simulation kernel before the start of the simulation.
How does the coroutine know it has “finished processing the event”? The answer: when it requests another event. The functions which request events from the simulation kernel are receive() and wait(), so their implementations contain a transferTo(main) call somewhere.
Their pseudocode, as implemented in OMNeT++:
receive()
{
    transferTo(main)
    retrieve current event
    return the event  // remember: events = messages
}

wait()
{
    create event e
    schedule it at (current sim. time + wait interval)
    transferTo(main)
    retrieve current event
    if (current event is not e) {
        error
    }
    delete e  // note: actual impl. reuses events
    return
}
Thus, the receive() and wait() calls are special points in the activity() function, because they are where simulation time elapses in the module, and where other modules get a chance to run.
Modules written with activity() need starter messages to “boot”. These starter messages are inserted into the FES automatically by OMNeT++ at the beginning of the simulation, even before the initialize() functions are called.
The simulation programmer needs to define the CPU stack size for coroutines. This cannot be automated.
16 or 32 kbytes is usually a good choice, but more space may be needed if the module uses recursive functions or has many/large local variables. OMNeT++ has a built-in mechanism that will usually detect if the module stack is too small and overflows. OMNeT++ can also report how much stack space a module actually uses at runtime.
Because local variables of activity() are preserved across events, you can store everything (state information, packet buffers, etc.) in them. Local variables can be initialized at the top of the activity() function, so there isn't much need to use initialize().
You do need finish(), however, if you want to write statistics at the end of the simulation. Because finish() cannot access the local variables of activity(), you have to put the variables and objects containing the statistics into the module class. You still don't need initialize() because class members can also be initialized at the top of activity().
Thus, a typical setup looks like this in pseudocode:
class MySimpleModule...
{
    ...
    variables for statistics collection
    activity();
    finish();
};

MySimpleModule::activity()
{
    declare local vars and initialize them
    initialize statistics collection variables
    while (true) {
        ...
    }
}

MySimpleModule::finish()
{
    record statistics into file
}
Pros:
Cons:
In most cases, cons outweigh pros and it is a better idea to use handleMessage() instead.
Coroutines are used by a number of other simulation packages:
If possible, avoid using global variables, including static class members. They are prone to cause several problems. First, they are not reset to their initial values (to zero) when you rebuild the simulation in Qtenv, or start another run in Cmdenv. This may produce surprising results. Second, they prevent you from parallelizing the simulation: when using parallel simulation, each partition of the model runs in a separate process, each having its own copy of the global variables. This is usually not what you want.
The solution is to encapsulate the variables into simple modules as private or protected data members, and expose them via public methods. Other modules can then call these public methods to get or set the values. Calling methods of other modules will be discussed in section [4.12]. Examples of such modules are InterfaceTable and RoutingTable in INET Framework.
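A minimal sketch of this idea is shown below; GlobalCounter and its methods are hypothetical names, not part of any library.

#include <omnetpp.h>
using namespace omnetpp;

// A simple module that owns the shared state; other modules obtain a pointer to it
// (e.g. via getModuleByPath() and check_and_cast<>) and call its public methods.
class GlobalCounter : public cSimpleModule
{
  private:
    long count = 0;
  public:
    void increment() { count++; }
    long getCount() const { return count; }
  protected:
    virtual void handleMessage(cMessage *msg) override { delete msg; }
};

Define_Module(GlobalCounter);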
The code of simple modules can be reused via subclassing, and redefining virtual member functions. An example:
class TransportProtocolExt : public TransportProtocol
{
  protected:
    virtual void recalculateTimeout();
};

Define_Module(TransportProtocolExt);

void TransportProtocolExt::recalculateTimeout()
{
    //...
}
The corresponding NED declaration:
simple TransportProtocolExt extends TransportProtocol
{
    @class(TransportProtocolExt);  // Important!
}
Module parameters declared in NED files are represented with the cPar class at runtime, and can be accessed by calling the par() member function of cComponent:
cPar& delayPar = par("delay");
cPar's value can be read with methods that correspond to the parameter's NED type: boolValue(), intValue(), doubleValue(), stringValue(), stdstringValue(), xmlValue(). There are also overloaded type cast operators for the corresponding types (bool; integer types including int, long, etc; double; const char *; cXMLElement *).
long numJobs = par("numJobs").intValue();
double processingDelay = par("processingDelay");  // using operator double()
Note that cPar has two methods for returning a string value: stringValue(), which returns const char *, and stdstringValue(), which returns std::string. For volatile parameters, only stdstringValue() may be used, but otherwise the two are interchangeable.
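For illustration, assuming a string parameter named greeting exists (the name is hypothetical):

std::string s1 = par("greeting").stdstringValue();  // works for volatile parameters as well
const char *s2 = par("greeting").stringValue();     // only valid for non-volatile parameters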
If you use the par("foo") parameter in expressions (such as 4*par("foo")+2), the C++ compiler may be unable to decide between overloaded operators and report ambiguity. This issue can be resolved by adding an explicit cast such as (double)par("foo"), or using the doubleValue() or intValue() methods.
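For example, either of the following forms resolves the ambiguity (foo is the hypothetical numeric parameter from the text above):

double a = 4 * (double)par("foo") + 2;         // explicit cast
double b = 4 * par("foo").doubleValue() + 2;   // value method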
A parameter can be declared volatile in the NED file. The volatile modifier indicates that the parameter is re-read every time a value is needed during simulation. Volatile parameters are typically used for things like a random packet generation interval, and are assigned values like exponential(1.0) (numbers drawn from the exponential distribution with mean 1.0).
In contrast, non-volatile NED parameters are constants, and reading their values multiple times is guaranteed to yield the same value. When a non-volatile parameter is assigned a random value like exponential(1.0), it is evaluated once at the beginning of the simulation and replaced with the result, so all reads will get the same (randomly generated) value.
The typical usage for non-volatile parameters is to read them in the initialize() method of the module class, and store the values in class variables for easy access later:
class Source : public cSimpleModule
{
  protected:
    long numJobs;
    virtual void initialize();
    ...
};

void Source::initialize()
{
    numJobs = par("numJobs");
    ...
}
volatile parameters need to be re-read every time the value is needed. For example, a parameter that represents a random packet generation interval may be used like this:
void Source::handleMessage(cMessage *msg)
{
    ...
    scheduleAt(simTime() + par("interval").doubleValue(), timerMsg);
    ...
}
This code looks up the parameter by name every time. The lookup can be avoided by storing the parameter object's pointer in a class variable, resulting in the following code:
class Source : public cSimpleModule
{
  protected:
    cPar *intervalp;
    virtual void initialize();
    virtual void handleMessage(cMessage *msg);
    ...
};

void Source::initialize()
{
    intervalp = &par("interval");
    ...
}

void Source::handleMessage(cMessage *msg)
{
    ...
    scheduleAt(simTime() + intervalp->doubleValue(), timerMsg);
    ...
}
Parameter values can be changed from the program, during execution. This is rarely needed, but may be useful for some scenarios.
The methods to set the parameter value are setBoolValue(), setLongValue(), setStringValue(), setDoubleValue(), setXMLValue(). There are also overloaded assignment operators for various types including bool, int, long, double, const char *, and cXMLElement *.
To allow a module to be notified about parameter changes, override its handleParameterChange() method, see [4.5.5].
The parameter's name and type are returned by the getName() and getType() methods. The latter returns a value from an enum, which can be converted to a readable string with the getTypeName() static method. The enum values are BOOL, DOUBLE, LONG, STRING and XML; and since the enum is an inner type, they usually have to be qualified with cPar::.
isVolatile() returns whether the parameter was declared volatile in the NED file. isNumeric() returns true if the parameter type is double or long.
The str() method returns the parameter's value in a string form. If the parameter contains an expression, then the string representation of the expression is returned.
An example usage of the above methods:
int n = getNumParams();
for (int i = 0; i < n; i++) {
    cPar& p = par(i);
    EV << "parameter: " << p.getName() << "\n";
    EV << "  type: " << cPar::getTypeName(p.getType()) << "\n";
    EV << "  contains: " << p.str() << "\n";
}
The NED properties of a parameter can be accessed with the getProperties() method that returns a pointer to the cProperties object that stores the properties of this parameter. Specifically, getUnit() returns the unit of measurement associated with the parameter (@unit property in NED).
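A small sketch of these accessors, reusing the interval parameter from the earlier examples and assuming it was declared with @unit(s) in NED:

cPar& p = par("interval");
cProperties *props = p.getProperties();  // all NED properties of the parameter
const char *unit = p.getUnit();          // e.g. "s" if the parameter was declared with @unit(s)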
Further cPar methods and related classes like cExpression and cDynamicExpression are used by the NED infrastructure to set up and assign parameters. They are documented in the API Reference, but they are normally of little interest to users.
As of version 4.2, OMNeT++ does not support parameter arrays, but in practice they can be emulated using string parameters. One can assign the parameter a string which contains all values in a textual form (for example, "0 1.234 3.95 5.467"), then parse this string in the simple module.
The cStringTokenizer class can be quite useful for this purpose. The constructor accepts a string, which it regards as a sequence of tokens (words) separated by delimiter characters (by default, spaces). Then you can either enumerate the tokens and process them one by one (hasMoreTokens(), nextToken()), or use one of the cStringTokenizer convenience methods to convert them into a vector of strings (asVector()), integers (asIntVector()), or doubles (asDoubleVector()).
The latter methods can be used like this:
const char *vstr = par("v").stringValue();  // e.g. "aa bb cc"
std::vector<std::string> v = cStringTokenizer(vstr).asVector();
and
const char *str = "34 42 13 46 72 41"; std::vector<int> v = cStringTokenizer().asIntVector(); const char *str = "0.4311 0.7402 0.7134"; std::vector<double> v = cStringTokenizer().asDoubleVector();
The following example processes the string by enumerating the tokens:
const char *str = "3.25 1.83 34 X 19.8"; // input std::vector<double> result; cStringTokenizer tokenizer(str); while (tokenizer.hasMoreTokens()) { const char *token = tokenizer.nextToken(); if (strcmp(token, "X")==0) result.push_back(DEFAULT_VALUE); else result.push_back(atof(token)); }
It is possible for modules to be notified when the value of a parameter changes at runtime, possibly due to another module dynamically changing it. The typical action is to re-read the parameter, and update the module's state if needed.
To enable notification, redefine the handleParameterChange() method of the module class. This method will be called back by the simulation kernel with the parameter name as argument every time a new value is assigned to a parameter. The method signature is the following:
void handleParameterChange(const char *parameterName);
The following example shows a module that re-reads its serviceTime parameter when its value changes:
void Queue::handleParameterChange(const char *parameterName)
{
    if (strcmp(parameterName, "serviceTime") == 0)
        serviceTime = par("serviceTime");  // refresh data member
}
Notifications are suppressed while the network (or module) is being set up.
handleParameterChange() methods need to be implemented carefully, because they may be called at a time when the module has not yet completed all initialization stages.
Also, be extremely careful when changing parameters from inside handleParameterChange(), because it is easy to accidentally create an infinite notification loop.
Module gates are represented by cGate objects. Gate objects know which other gates they are connected to, and which channel objects are associated with the links.
The cModule class has a number of member functions that deal with gates. You can look up a gate by name using the gate() method:
cGate *outGate = gate("out");
This works for input and output gates. However, a gate that was declared inout in NED is actually represented by the simulation kernel with two gates, so the above call would result in a gate not found error. The gate() method needs to be told whether you need the input or the output half of the gate. This can be done by appending "$i" or "$o" to the gate name. The following example retrieves the two gates of the inout gate "g":
cGate *gIn = gate("g$i");
cGate *gOut = gate("g$o");
Another way is to use the gateHalf() function, which takes the inout gate's name plus either cGate::INPUT or cGate::OUTPUT:
cGate *gIn = gateHalf("g", cGate::INPUT);
cGate *gOut = gateHalf("g", cGate::OUTPUT);
These methods throw an error if the gate does not exist, so they cannot be used to determine whether the module has a particular gate. For that purpose there is a hasGate() method. An example:
if (hasGate("optOut")) send(new cMessage(), "optOut");
A gate can also be identified and looked up by a numeric gate ID. You can get the ID from the gate itself (getId() method), or from the module by gate name (findGate() method). The gate() method also has an overloaded variant which returns the gate from the gate ID.
int gateId = gate("in")->getId();
// or:
int gateId = findGate("in");
As gate IDs are more useful with gate vectors, we'll cover them in detail in a later section.
Gate vectors possess one cGate object per element. To access individual gates in the vector, you need to call the gate() function with an additional index parameter. The index should be between zero and size-1. The size of the gate vector can be read with the gateSize() method. The following example iterates through all elements in the gate vector:
for (int i = 0; i < gateSize("out"); i++) {
    cGate *g = gate("out", i);
    //...
}
A gate vector cannot have “holes” in it; that is, gate() never returns nullptr and never throws an error, provided the gate vector exists and the index is within bounds.
For inout gates, gateSize() may be called with or without the "$i"/"$o" suffix, and returns the same number.
The hasGate() method may be used both with and without an index, and they mean two different things: without an index it tells the existence of a gate vector with the given name, regardless of its size (it returns true for an existing vector even if its size is currently zero!); with an index it also examines whether the index is within the bounds.
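For example, assuming the module declares a gate vector named out:

bool haveVector = hasGate("out");     // true if the gate vector "out" exists, even if its size is zero
bool haveThird  = hasGate("out", 2);  // additionally checks that index 2 is within bounds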
A gate can also be accessed by its ID. A very important property of gate IDs is that they are contiguous within a gate vector, that is, the ID of a gate g[k] can be calculated as the ID of g[0] plus k. This allows you to efficiently access any gate in a gate vector, because retrieving a gate by ID is more efficient than by name and index. The index of the first gate can be obtained with gate("out",0)->getId(), but it is better to use a dedicated method, gateBaseId(), because it also works when the gate vector size is zero.
Two further important properties of gate IDs: they are stable and unique (within the module). By stable we mean that the ID of a gate never changes; and by unique we not only mean that at any given time no two gates have the same IDs, but also that IDs of deleted gates do not get reused later, so gate IDs are unique in the lifetime of a simulation run.
The following example iterates through a gate vector, using IDs:
int baseId = gateBaseId("out");
int size = gateSize("out");
for (int i = 0; i < size; i++) {
    cGate *g = gate(baseId + i);
    //...
}
If you need to go through all gates of a module, there are two possibilities. One is invoking the getGateNames() method that returns the names of all gates and gate vectors the module has; then you can call isGateVector(name) to determine whether individual names identify a scalar gate or a gate vector; then gate vectors can be enumerated by index. Also, for inout gates getGateNames() returns the base name without the "$i"/"$o" suffix, so the two directions need to be handled separately. The gateType(name) method can be used to test whether a gate is inout, input or output (it returns cGate::INOUT, cGate::INPUT, or cGate::OUTPUT).
Clearly, the above solution can be quite cumbersome. An alternative is to use the GateIterator class provided by cModule. It goes like this:
for (cModule::GateIterator i(this); !i.end(); i++) {
    cGate *gate = *i;
    ...
}
Where this denotes the module whose gates are being enumerated (it can be replaced by any cModule * variable).
Although rarely needed, it is possible to add and remove gates during simulation. You can add scalar gates and gate vectors, change the size of gate vectors, and remove scalar gates and whole gate vectors. It is not possible to remove individual random gates from a gate vector, to remove one half of an inout gate (e.g. "gate$o"), or to set different gate vector sizes on the two halves of an inout gate vector.
The cModule methods for adding and removing gates are addGate(name,type,isvector=false) and deleteGate(name). Gate vector size can be changed by using setGateSize(name,size). None of these methods accept "$i" / "$o" suffix in gate names.
The getName() method of cGate returns the name of the gate or gate vector without the index. If you need a string that contains the gate index as well, getFullName() is what you want. If you also want to include the hierarchical name of the owner module, call getFullPath().
The getType() method of cGate returns the gate type, either cGate::INPUT or cGate::OUTPUT. (It cannot return cGate::INOUT, because an inout gate is represented by a pair of cGates.)
If you have a gate that represents half of an inout gate (that is, getName() returns something like "g$i" or "g$o"), you can split the name with the getBaseName() and getNameSuffix() methods. The getBaseName() method returns the name without the $i/$o suffix, and getNameSuffix() returns just the suffix (including the dollar sign). For normal gates, getBaseName() is the same as getName(), and getNameSuffix() returns the empty string.
The isVector(), getIndex() and getVectorSize() methods speak for themselves; size() is an alias for getVectorSize(). For non-vector gates, getIndex() returns 0 and getVectorSize() returns 1.
The getId() method returns the gate ID (not to be confused with the gate index).
The getOwnerModule() method returns the module the gate object belongs to.
To illustrate these methods, we expand the gate iterator example to print some information about each gate:
for (cModule::GateIterator i(this); !i.end(); i++) {
    cGate *gate = *i;
    EV << gate->getFullName() << ": ";
    EV << "id=" << gate->getId() << ", ";
    if (!gate->isVector())
        EV << "scalar gate, ";
    else
        EV << "gate " << gate->getIndex()
           << " in vector " << gate->getName()
           << " of size " << gate->getVectorSize() << ", ";
    EV << "type: " << cGate::getTypeName(gate->getType());
    EV << "\n";
}
There are further cGate methods to access and manipulate the connection(s) attached to the gate; they will be covered in the following sections.
Simple module gates normally have one connection attached. Compound module gates, however, need to be connected both inside and outside of the module to be useful. A series of connections (joined by compound module gates) is called a connection path or just path. A path is directed: it normally starts at an output gate of a simple module, ends at an input gate of a simple module, and passes through several compound module gates.
Every cGate object contains pointers to the previous gate and the next gate in the path (returned by the getPreviousGate() and getNextGate() methods), so a path can be thought of as a doubly-linked list.
The use of the previous gate and next gate pointers with various gate types is illustrated in the figure below.
The start and end gates of the path can be found with the getPathStartGate() and getPathEndGate() methods, which simply follow the previous gate and next gate pointers, respectively, until they are nullptr.
The isConnectedOutside() and isConnectedInside() methods return whether a gate is connected on the outside or on the inside. They examine either the previous or the next pointer, depending on the gate type (input or output). For example, an output gate is connected outside if the next pointer is non-nullptr; the same function for an input gate checks the previous pointer. Again, see figure below for an illustration.
The isConnected() method is a bit different: it returns true if the gate is fully connected, that is, for a compound module gate both inside and outside, and for a simple module gate, outside.
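For example, the simple module at the far end of the path that starts at one of the module's output gates can be found like this (a sketch; the gate name out is only for illustration):

cGate *outGate = gate("out");
cGate *endGate = outGate->getPathEndGate();      // input gate of the simple module at the end of the path
cModule *destModule = endGate->getOwnerModule(); // the receiving simple module
EV << "path ends at " << endGate->getFullPath() << "\n";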
The following code prints the name of the gate a simple module gate is connected to:
cGate *g = gate("somegate");
cGate *otherGate = g->getType()==cGate::OUTPUT ? g->getNextGate() : g->getPreviousGate();
if (otherGate)
    EV << "gate is connected to: " << otherGate->getFullPath() << endl;
else
    EV << "gate not connected" << endl;
The channel object associated with a connection is accessible by a pointer stored at the source gate of the connection. The pointer is returned by the getChannel() method of the gate:
cChannel *channel = gate->getChannel();
The result may be nullptr, that is, a connection may not have an associated channel object.
If you have a channel pointer, you can get back its source gate with the getSourceGate() method:
cGate *gate = channel->getSourceGate();
cChannel is just an abstract base class for channels, so to access details of the channel you might need to cast the resulting pointer into a specific channel class, for example cDelayChannel or cDatarateChannel.
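As a sketch, access to datarate-specific information might look like this; the gate name ppp$o follows the later example in this section, and the getDatarate() accessor of cDatarateChannel is assumed:

cChannel *ch = gate("ppp$o")->getChannel();
if (auto *datarateChannel = dynamic_cast<cDatarateChannel *>(ch))
    EV << "datarate: " << datarateChannel->getDatarate() << " bps\n";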
Another specific channel type is cIdealChannel, which basically does nothing: it acts as if there was no channel object assigned to the connection. OMNeT++ sometimes transparently inserts a cIdealChannel into a channel-less connection, for example to hold the display string associated with the connection.
Often you are not really interested in a specific connection's channel, but rather in the transmission channel (see [4.7.6]) of the connection path that starts at a specific output gate. The transmission channel can be found by following the connection path until you find a channel whose isTransmissionChannel() method returns true, but cGate has a convenience method for this, named getTransmissionChannel(). An example usage:
cChannel *txChan = gate("ppp$o")->getTransmissionChannel();
A complementary method to getTransmissionChannel() is getIncomingTransmissionChannel(); it is usually invoked on input gates, and searches the connection path in the reverse direction.
cChannel *incomingTxChan = gate("ppp$i")->getIncomingTransmissionChannel();
Both methods throw an error if no transmission channel is found. If this is not suitable, use the similar findTransmissionChannel() and findIncomingTransmissionChannel() methods that simply return nullptr in that case.
Channels are covered in more detail in section [4.8].
On an abstract level, an OMNeT++ simulation model is a set of simple modules that communicate with each other via message passing. The essence of simple modules is that they create, send, receive, store, modify, schedule and destroy messages -- the rest of OMNeT++ exists to facilitate this task, and collect statistics about what was going on.
Messages in OMNeT++ are instances of the cMessage class or one of its subclasses. Network packets are represented with cPacket, which is also subclassed from cMessage. Message objects are created using the C++ new operator, and destroyed using the delete operator when they are no longer needed.
Messages are described in detail in chapter [5]. At this point, all we need to know about them is that they are referred to as cMessage * pointers. In the examples below, messages will be created with new cMessage("foo") where "foo" is a descriptive message name, used for visualization and debugging purposes.
Nearly all simulation models need to schedule future events in order to implement timers, timeouts, delays, etc. Some typical examples:
In OMNeT++, you solve such tasks by letting the simple module send a message to itself; the message would be delivered to the simple module at a later point of time. Messages used this way are called self-messages, and the module class has special methods for them that allow for implementing self-messages without gates and connections.
The module can send a message to itself using the scheduleAt() function. scheduleAt() accepts an absolute simulation time:
scheduleAt(t, msg);
Since the target time is often relative to the current simulation time, the function has another variant, scheduleAfter(), which takes a delta instead of an absolute simulation time. The following calls are equivalent:
scheduleAt(simTime()+delta, msg);
scheduleAfter(delta, msg);
Self-messages are delivered to the module in the same way as other messages (via the usual receive calls or handleMessage()); the module may call the isSelfMessage() member function of any received message to determine whether it is a self-message.
You can determine whether a message is currently in the FES by calling its isScheduled() member function.
Scheduled self-messages can be cancelled (i.e. removed from the FES). This feature facilitates implementing timeouts.
cancelEvent(msg);
The cancelEvent() function takes a pointer to the message to be cancelled, and also returns the same pointer. After cancelling the message, you may delete it or reuse it in subsequent scheduleAt() calls. cancelEvent() has no effect if the message is not currently scheduled.
There is also a convenience method called cancelAndDelete() implemented as if (msg!=nullptr) delete cancelEvent(msg); this method is primarily useful for writing destructors.
The following example shows how to implement a timeout in a simple imaginary stop-and-wait protocol. The code utilizes a timeoutEvent module class data member that stores the pointer of the cMessage used as self-message, and compares it to the pointer of the received message to identify whether a timeout has occurred.
void Protocol::handleMessage(cMessage *msg)
{
    if (msg == timeoutEvent) {
        // timeout expired, re-send packet and restart timer
        send(currentPacket->dup(), "out");
        scheduleAt(simTime() + timeout, timeoutEvent);
    }
    else if (...) {  // if acknowledgement received
        // cancel timeout, prepare to send next packet, etc.
        cancelEvent(timeoutEvent);
        ...
    }
    else {
        ...
    }
}
To reschedule an event which is currently scheduled to a different simulation time, it first needs to be cancelled using cancelEvent(). This is shown in the following example code:
if (msg->isScheduled())
    cancelEvent(msg);
scheduleAt(simTime() + delay, msg);
For convenience, the above functionality is available as a single call, as the functions rescheduleAt() and rescheduleAfter(). The first one takes an absolute simulation time, the second one a delta relative to the current simulation time.
rescheduleAt(t, msg);
rescheduleAfter(delta, msg);
Using these dedicated functions may be more efficient than the cancelEvent()+scheduleAt() combo.
Once created, a message object can be sent through an output gate using one of the following functions:
send(cMessage *msg, const char *gateName, int index=0);
send(cMessage *msg, int gateId);
send(cMessage *msg, cGate *gate);
In the first function, the argument gateName is the name of the gate the message has to be sent through. If this gate is a vector gate, index determines through which particular output gate this has to be done; otherwise, the index argument is not needed.
The second and third functions use the gate ID and the pointer to the gate object. They are faster than the first one because they don't have to search for the gate by name.
Examples:
send(msg, "out"); send(msg, "outv", i); // send via a gate in a gate vector
To send via an inout gate, remember that an inout gate is an input and an output gate glued together, and the two halves can be identified with the $i and $o name suffixes. Thus, the gate name needs to be specified in the send() call with the $o suffix:
send(msg, "g$o"); send(msg, "g$o", i); // if "g[]" is a gate vector
When implementing broadcasts or retransmissions, two frequently occurring tasks in protocol simulation, you might feel tempted to use the same message in multiple send() operations. Do not do it -- you cannot send the same message object multiple times. Instead, duplicate the message object.
Why? A message is like a real-world object -- it cannot be at two places at the same time. Once sent out, the message no longer belongs to the module: it is taken over by the simulation kernel, and will eventually be delivered to the destination module. The sender module should not even refer to its pointer any more. Once the message arrives in the destination module, that module will have full authority over it -- it can send it on, destroy it immediately, or store it for further handling. The same applies to messages that have been scheduled -- they belong to the simulation kernel until they are delivered back to the module.
To enforce the rules above, all message sending functions check that the module actually owns the message it is about to send. If the message is in another module, in a queue, currently scheduled, etc., a runtime error will be generated: not owner of message.
In your model, you may need to broadcast a message to several destinations. Broadcast can be implemented in a simple module by sending out copies of the same message, for example on every gate of a gate vector. As described above, you cannot use the same message pointer in all send() calls -- what you have to do instead is create copies (duplicates) of the message object and send them.
Example:
for (int i = 0; i < n; i++) {
    cMessage *copy = msg->dup();
    send(copy, "out", i);
}
delete msg;
You might have noticed that copying the message for the last gate is redundant: we can just send out the original message there. Also, we can utilize gate IDs to avoid looking up the gate by name for each send operation. We can exploit the fact that the ID of gate k in a gate vector can be produced as baseID + k. The optimized version of the code looks like this:
int outGateBaseId = gateBaseId("out");
for (int i = 0; i < n; i++)
    send(i==n-1 ? msg : msg->dup(), outGateBaseId+i);
Many communication protocols involve retransmissions of packets (frames). When implementing retransmissions, you cannot just hold a pointer to the same message object and send it again and again -- you'd get the not owner of message error on the first resend.
Instead, for (re)transmission, you should create and send copies of the message, and retain the original. When you are sure there will not be any more retransmission, you can delete the original message.
Creating and sending a copy:
// (re)transmit packet:
cMessage *copy = packet->dup();
send(copy, "out");
and finally (when no more retransmissions will occur):
delete packet;
Sometimes it is necessary for a module to hold a message for some time interval and then send it. This can be achieved with self-messages, but there is a more straightforward method: delayed sending. The following methods are provided for delayed sending:
sendDelayed(cMessage *msg, double delay, const char *gateName, int index);
sendDelayed(cMessage *msg, double delay, int gateId);
sendDelayed(cMessage *msg, double delay, cGate *gate);
The arguments are the same as for send(), except for the extra delay parameter. The delay value must be non-negative. The effect of the function is as if the module had kept the message for the delay interval and sent it afterwards; even the sending time of the message will be set to the current simulation time plus delay.
An example call:
sendDelayed(msg, 0.005, "out");
The sendDelayed() function does not internally perform a scheduleAt() followed by a send(), but rather it computes everything about the message sending up front, including the arrival time and the target module. This has two consequences. First, sendDelayed() is more efficient than a scheduleAt() followed by a send() because it eliminates one event. The second, less pleasant consequence is that changes in the connection path during the delay will not be taken into account (because everything is calculated in advance, before the changes take place).
Therefore, despite its performance advantage, you should think twice before using sendDelayed() in a simulation model. It may have its place in a one-shot simulation model that you know is static, but it certainly should be avoided in reusable modules that need to work correctly in a wide variety of simulation models.
At times it is convenient to be able to send a message directly to an input gate of another module. The sendDirect() function is provided for this purpose.
This function has several flavors. The first set of sendDirect() functions accept a message and a target gate; the latter can be specified in various forms:
sendDirect(cMessage *msg, cModule *mod, int gateId)
sendDirect(cMessage *msg, cModule *mod, const char *gateName, int index=-1)
sendDirect(cMessage *msg, cGate *gate)
An example for direct sending:
cModule *targetModule = getParentModule()->getSubmodule("node2"); sendDirect(new cMessage("msg"), targetModule, "in");
At the target module, there is no difference between messages received directly and those received over connections.
The target gate must be an unconnected gate; in other words, modules must have dedicated gates to be able to receive messages sent via sendDirect(). You cannot have a gate which receives messages via both connections and sendDirect().
It is recommended to tag gates dedicated for receiving messages via sendDirect() with the @directIn property in the module's NED declaration. This will cause OMNeT++ not to complain that the gate is not connected in the network or compound module where the module is used.
An example:
simple Radio {
    gates:
        input radioIn @directIn;  // for receiving air frames
}
The target module is usually a simple module, but it can also be a compound module. The message will follow the connections that start at the target gate, and will be delivered to the module at the end of the path -- just as with normal connections. The path must end in a simple module.
It is even permitted to send to an output gate, which will also cause the message to follow the connections starting at that gate. This can be useful, for example, when several submodules are sending to a single output gate of their parent module.
A second set of sendDirect() methods accept a propagation delay and a transmission duration as parameters as well:
sendDirect(cMessage *msg, simtime_t propagationDelay, simtime_t duration, cModule *mod, int gateId)
sendDirect(cMessage *msg, simtime_t propagationDelay, simtime_t duration, cModule *mod, const char *gateName, int index=-1)
sendDirect(cMessage *msg, simtime_t propagationDelay, simtime_t duration, cGate *gate)
The transmission duration parameter is important when the message is also a packet (instance of cPacket). For messages that are not packets (not subclassed from cPacket), the duration parameter is ignored.
If the message is a packet, the duration will be written into the packet, and can be read by the receiver with the getDuration() method of the packet.
The receiver module can choose whether it wants the simulation kernel to deliver the packet object to it at the start or at the end of the reception. The default is the latter; the module can change it by calling setDeliverImmediately() on the final input gate, that is, on targetGate->getPathEndGate().
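For example, the module implementing the Radio simple module shown above could switch to at-start delivery on its radioIn gate in initialize(). This is only a sketch; the Radio class name and the gate name are taken from the NED example above:

void Radio::initialize()
{
    // deliver incoming packets at the start of reception instead of at its end
    gate("radioIn")->setDeliverImmediately(true);
}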
When a message is sent out on a gate, it usually travels through a series of connections until it arrives at the destination module. We call this series of connections a connection path.
Several connections in the path may have an associated channel, but there can be only one channel per path that models nonzero transmission duration. This restriction is enforced by the simulation kernel. This channel is called the transmission channel.
Packets may only be sent when the transmission channel is idle. This means that after each transmission, the sender module needs to wait until the channel has finished transmitting before it can send another packet.
You can get a pointer to the transmission channel by calling the getTransmissionChannel() method on the output gate. The channel's isBusy() and getTransmissionFinishTime() methods can tell you whether a channel is currently transmitting, and when the transmission is going to finish. (When the latter is less than or equal to the current simulation time, the channel is free.) If the channel is currently busy, sending needs to be postponed: the packet can be stored in a queue, and a timer (self-message) can be scheduled for the time when the channel becomes free.
A code example to illustrate the above process:
cPacket *pkt = ...;  // packet to be transmitted
cChannel *txChannel = gate("out")->getTransmissionChannel();
simtime_t txFinishTime = txChannel->getTransmissionFinishTime();
if (txFinishTime <= simTime()) {
    // channel free; send out packet immediately
    send(pkt, "out");
}
else {
    // store packet and schedule timer; when the timer expires,
    // the packet should be removed from the queue and sent out
    txQueue.insert(pkt);
    scheduleAt(txFinishTime, endTxMsg);
}
The getTransmissionChannel() method searches the connection path each time it is called. If performance is important, it is a good idea to obtain the transmission channel pointer once, and then cache it. When the network topology changes, the cached channel pointer needs to be updated; section [4.14.3] describes the mechanism that can be used to get notifications about topology changes.
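A minimal sketch of such caching, assuming a txChannel pointer member added to the module class for this purpose:

// in initialize(): look up and cache the transmission channel pointer
// (txChannel is an assumed cChannel* member of the module class)
txChannel = gate("out")->getTransmissionChannel();

// later, e.g. in handleMessage(), the cached pointer can be used directly:
if (!txChannel->isBusy())
    send(pkt, "out");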
As a result of error modeling in the channel, the packet may arrive with the bit error flag set (see the hasBitError() method). It is the receiver module's responsibility to examine this flag and take appropriate action (e.g. discard the packet).
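A typical receiver-side check might look like the following sketch:

// discard packets that arrived with bit errors
if (pkt->hasBitError()) {
    EV << "Packet " << pkt->getName() << " received with bit errors, discarding\n";
    delete pkt;
    return;
}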
Normally the packet object gets delivered to the destination module at the simulation time that corresponds to finishing the reception of the message (i.e. the arrival of its last bit). However, the receiver module may change this by “reprogramming” the receiver gate with the setDeliverImmediately() method:
gate("in")->setDeliverImmediately(true);
This method may only be called on simple module input gates, and it instructs the simulation kernel to deliver packets arriving through that gate at the simulation time that corresponds to the beginning of the reception process. setDeliverImmediately() only needs to be called once, so it is usually done in the initialize() method of the module.
When a packet is delivered to the module, the packet's isReceptionStart() method can be called to determine whether it corresponds to the start or end of the reception process (it should match the deliver-immediately setting of the input gate), and getDuration() returns the transmission duration.
The following example code prints the start and end times of a packet reception:
simtime_t startTime, endTime;
if (pkt->isReceptionStart()) {
    // gate was reprogrammed with setDeliverImmediately(true)
    startTime = pkt->getArrivalTime();  // or: simTime();
    endTime = startTime + pkt->getDuration();
}
else {
    // default case
    endTime = pkt->getArrivalTime();  // or: simTime();
    startTime = endTime - pkt->getDuration();
}
EV << "interval: " << startTime << ".." << endTime << "\n";
Note that this works with wireless connections (sendDirect()) as well; there, the duration is an argument to the sendDirect() call.
Certain protocols, for example Ethernet, require the ability to abort a transmission before it completes. The support OMNeT++ provides for this task is the forceTransmissionFinishTime() channel method. This method forcibly overwrites the transmissionFinishTime member of the channel with the given value, allowing the sender to transmit another packet without raising the “channel is currently busy” error. The receiving party needs to be notified about the aborted transmission by some external means, for example by sending another packet or an out-of-band message.
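A sketch of aborting an ongoing transmission on the sender side (the notification of the receiver is model-specific and not shown; newPkt is an assumed, already prepared packet):

// mark the transmission channel as idle from the current simulation time on
cChannel *txChannel = gate("out")->getTransmissionChannel();
txChannel->forceTransmissionFinishTime(simTime());

// the channel no longer reports "busy", so a new packet may be sent right away
send(newPkt, "out");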
Message sending is implemented like this: the arrival time and the bit error flag of a message are calculated right inside the send() call, then the message is inserted into the FES with the calculated arrival time. The message does not get scheduled individually for each link. This implementation was chosen because of its run-time efficiency.
This is not a huge problem in practice, but if it is important to model channels with changing parameters, the solution is to insert simple modules into the path to ensure strict scheduling.
The code that inserts the message into the FES is the arrived() method of the recipient module. By overriding this method, it is possible to perform custom processing at the recipient module immediately, still from within the send() call. Use this only if you know what you are doing!
activity()-based modules receive messages with the receive() method of cSimpleModule. receive() cannot be used with handleMessage()-based modules.
cMessage *msg = receive();
The receive() function accepts an optional timeout parameter. (This is a delta, not an absolute simulation time.) If no message arrives within the timeout period, the function returns nullptr.
simtime_t timeout = 3.0;
cMessage *msg = receive(timeout);
if (msg == nullptr) {
    ...  // handle timeout
}
else {
    ...  // process message
}
The wait() function suspends the execution of the module for a given amount of simulation time (a delta). wait() cannot be used with handleMessage()-based modules.
wait(delay);
In other simulation software, wait() is often called hold. Internally, the wait() function is implemented by a scheduleAt() followed by a receive(). The wait() function is very convenient in modules that do not need to be prepared for arriving messages, for example message generators. An example:
for (;;) {
    // wait for some, potentially random, amount of time, specified
    // in the interarrivalTime volatile module parameter
    wait(par("interarrivalTime").doubleValue());

    // generate and send message
    ...
}
It is a runtime error if a message arrives during the wait interval. If you expect messages to arrive during the wait period, you can use the waitAndEnqueue() function. It takes a pointer to a queue object (of class cQueue, described in chapter [7]) in addition to the wait interval. Messages that arrive during the wait interval are accumulated in the queue, and they can be processed after the waitAndEnqueue() call returns.
cQueue queue("queue");
...
waitAndEnqueue(waitTime, &queue);
if (!queue.empty()) {
    // process messages arrived during wait interval
    ...
}
Channels encapsulate parameters and behavior associated with connections. Channel types are like simple modules, in the sense that they are declared in NED, and there are C++ implementation classes behind them. Section [3.5] describes NED language support for channels, and explains how to associate C++ classes with channel types declared in NED.
C++ channel classes must subclass from the abstract base class cChannel. However, when creating a new channel class, it may be more practical to extend one of the existing C++ channel classes behind the three predefined NED channel types: cIdealChannel, cDelayChannel, or cDatarateChannel.
Channel classes need to be registered with the Define_Channel() macro, just like simple module classes need Define_Module().
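For example, a hypothetical CustomChannel class extending cDatarateChannel could be declared and registered like this (a sketch; the class name is illustrative):

// CustomChannel is a hypothetical channel class; extending cDatarateChannel
// inherits its delay, data rate and error modeling behavior
class CustomChannel : public cDatarateChannel
{
    // redefine the virtual member functions whose behavior should change
};

Define_Channel(CustomChannel);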
The channel base class cChannel inherits from cComponent, so channels participate in the initialization and finalization protocol (initialize() and finish()) described in [4.3.3].
The parent module of a channel (as returned by the getParentModule()) is the module that contains the connection. If a connection connects two modules that are children of the same compound module, the channel's parent is the compound module. If the connection connects a compound module to one of its submodules, the channel's parent is also the compound module.
When subclassing cChannel, the following pure virtual member functions need to be overridden: isTransmissionChannel(), getTransmissionFinishTime(), and processMessage().
The first two functions are usually one-liners; the channel behavior is encapsulated in the third function, processMessage().
The first function, isTransmissionChannel(), determines whether the channel is a transmission channel, i.e. one that models transmission duration. A transmission channel sets the duration field of packets sent through it (see the setDuration() method of cPacket).
The getTransmissionFinishTime() function is only used with transmission channels, and it should return the simulation time the sender will finish (or has finished) transmitting. This method is called by modules that send on a transmission channel to find out when the channel becomes available. The channel's isBusy() method is implemented simply as return getTransmissionFinishTime() > simTime(). For non-transmission channels, the getTransmissionFinishTime() return value may be any simulation time that is less than or equal to the current simulation time.
The third function, processMessage() encapsulates the channel's functionality. However, before going into the details of this function we need to understand how OMNeT++ handles message sending on connections.
Inside the send() call, OMNeT++ follows the connection path denoted by the getNextGate() functions of gates, until it reaches the target module. At each “hop”, the corresponding connection's channel (if the connection has one) gets a chance to add to the message's arrival time (propagation time modeling), calculate a transmission duration, and to modify the message object in various ways, such as set the bit error flag in it (bit error modeling). After processing all hops that way, OMNeT++ inserts the message object into the Future Events Set (FES, see section [4.1.2]), and the send() call returns. Then OMNeT++ continues to process events in increasing timestamp order. The message will be delivered to the target module's handleMessage() (or receive()) function when it gets to the front of the FES.
A few more details: a channel may instruct OMNeT++ to delete the message instead of inserting it into the FES; this can be useful to model disabled channels, or to model that the message has been lost altogether. The deliver-immediately setting of the final gate in the path determines whether the transmission duration is added to the arrival time or not. Packet transmissions have been described in section [4.7.6].
Now, back to the processMessage() method.
The method gets called as part of the above process, when the message is processed at the given hop. The method's arguments are the message object, the simulation time the beginning of the message will reach the channel (i.e. the sum of all previous propagation delays), and a struct in which the method can return the results.
The result_t struct is an inner type of cChannel, and looks like this:
struct result_t {
    simtime_t delay;     // propagation delay
    simtime_t duration;  // transmission duration
    bool discard;        // whether the channel has lost the message
};
It also has a constructor that initializes all fields to zero; it is left out for brevity.
The method should model the transmission of the given message starting at the given time t, and store the results (propagation delay, transmission duration, deletion flag) in the result object. Only the relevant fields in the result object need to be changed; the others can be left untouched.
Transmission duration and bit error modeling only applies to packets (i.e. to instances of cPacket, where cMessage's isPacket() returns true); it should be skipped for non-packet messages. processMessage() does not need to call the setDuration() method on the packet; this is done by the simulation kernel. However, it should call setBitError(true) on the packet if error modeling results in bit errors.
If the method sets the discard flag in the result object, that means that the message object will be deleted by OMNeT++; this facility can be used to model that the message gets lost in the channel.
The processMessage() method does not need to throw error on overlapping transmissions, or if the packet's duration field is already set; these checks are done by the simulation kernel before processMessage() is called.
To illustrate coding channel behavior, we look at how the built-in channel types are implemented.
cIdealChannel lets through messages and packets without any delay or change. Its isTransmissionChannel() method returns false, getTransmissionFinishTime() returns 0s, and the body of its processMessage() method is empty:
void cIdealChannel::processMessage(cMessage *msg, simtime_t t, result_t& result)
{
}
cDelayChannel implements propagation delay, and it can be disabled; in its disabled state, messages sent through it will be discarded. This class still models zero transmission duration, so its isTransmissionChannel() and getTransmissionFinishTime() methods still return false and 0s. The processMessage() method sets the appropriate fields in the result_t struct:
void cDelayChannel::processMessage(cMessage *msg, simtime_t t, result_t& result)
{
    // if channel is disabled, signal that message should be deleted
    result.discard = isDisabled;

    // propagation delay modeling
    result.delay = delay;
}
The handleParameterChange() method is also redefined, so that the channel can update its internal delay and isDisabled data members if the corresponding channel parameters change during simulation.
cDatarateChannel is different. It models transmission duration (the duration is calculated from the data rate and the length of the packet), so isTransmissionChannel() returns true. getTransmissionFinishTime() returns the value of a txfinishtime data member, which gets updated after every packet.
simtime_t cDatarateChannel::getTransmissionFinishTime() const
{
    return txfinishtime;
}
cDatarateChannel's processMessage() method makes use of the isDisabled, datarate, ber and per data members, which are also kept up to date with the help of handleParameterChange().
void cDatarateChannel::processMessage(cMessage *msg, simtime_t t, result_t& result)
{
    // if channel is disabled, signal that message should be deleted
    if (isDisabled) {
        result.discard = true;
        return;
    }

    // datarate modeling
    if (datarate != 0 && msg->isPacket()) {
        simtime_t duration = ((cPacket *)msg)->getBitLength() / datarate;
        result.duration = duration;
        txfinishtime = t + duration;
    }
    else {
        txfinishtime = t;
    }

    // propagation delay modeling
    result.delay = delay;

    // bit error modeling
    if ((ber != 0 || per != 0) && msg->isPacket()) {
        cPacket *pkt = (cPacket *)msg;
        if (ber != 0 && dblrand() < 1.0 - pow(1.0-ber, (double)pkt->getBitLength()))
            pkt->setBitError(true);
        if (per != 0 && dblrand() < per)
            pkt->setBitError(true);
    }
}
You can finish the simulation with the endSimulation() function:
endSimulation();
endSimulation() is rarely needed in practice because you can specify simulation time and CPU time limits in the ini file (see later).
When the simulation encounters an error condition, it can throw a cRuntimeError exception to terminate the simulation with an error message. (Under Cmdenv, the exception also causes a nonzero program exit code). The cRuntimeError class has a constructor with a printf()-like argument list. An example:
if (windowSize <= 0) throw cRuntimeError("Invalid window size %d; must be >=1", windowSize);
Do not include a newline (\n), period, or exclamation mark in the error text; these will be added by OMNeT++.
The same effect can be achieved by calling the error() method of cModule:
if (windowSize <= 0) error("Invalid window size %d; must be >=1", windowSize);
Of course, the error() method can only be used when a module pointer is available.
Finite State Machines (FSMs) can make life with handleMessage() easier. OMNeT++ provides a class and a set of macros to build FSMs.
The key points are described below.
OMNeT++'s FSMs can be nested. This means that any state (or rather, its entry or exit code) may contain a further full-fledged FSM_Switch() (see below). This allows you to introduce sub-states and thereby bring some structure into the state space if it becomes too large.
FSM state is stored in an object of type cFSM. The possible states are defined by an enum; the enum is also a place to define which state is transient and which is steady. In the following example, SLEEP and ACTIVE are steady states and SEND is transient (the numbers in parentheses must be unique within the state type and they are used for constructing the numeric IDs for the states):
enum {
    INIT = 0,
    SLEEP = FSM_Steady(1),
    ACTIVE = FSM_Steady(2),
    SEND = FSM_Transient(1),
};
The actual FSM is embedded in a switch-like statement, FSM_Switch(), with cases for entering and leaving each state:
FSM_Switch(fsm) {
    case FSM_Exit(state1):
        //...
        break;
    case FSM_Enter(state1):
        //...
        break;
    case FSM_Exit(state2):
        //...
        break;
    case FSM_Enter(state2):
        //...
        break;
    //...
};
State transitions are done via calls to FSM_Goto(), which simply stores the new state in the cFSM object:
FSM_Goto(fsm, newState);
The FSM starts from the state with the numeric code 0; this state is conventionally named INIT.
FSMs can log their state transitions, with the output looking like this:
...
FSM GenState: leaving state SLEEP
FSM GenState: entering state ACTIVE
...
FSM GenState: leaving state ACTIVE
FSM GenState: entering state SEND
FSM GenState: leaving state SEND
FSM GenState: entering state ACTIVE
...
FSM GenState: leaving state ACTIVE
FSM GenState: entering state SLEEP
...
To enable the above output, define FSM_DEBUG before including omnetpp.h.
#define FSM_DEBUG    // enables debug output from FSMs
#include <omnetpp.h>
FSMs perform their logging via the FSM_Print() macro, defined as something like this:
#define FSM_Print(fsm,exiting) \
    (EV << "FSM " << (fsm).getName() \
        << ((exiting) ? ": leaving state " : ": entering state ") \
        << (fsm).getStateName() << endl)
The log output format can be changed by undefining FSM_Print() after the inclusion of omnetpp.h, and providing a new definition.
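For example, the following sketch prefixes each log line with the module path instead of the FSM name (getFullPath() is available because FSM_Switch() expands inside a module member function such as handleMessage()):

#undef FSM_Print
#define FSM_Print(fsm,exiting) \
    (EV << "[" << getFullPath() << "] " \
        << ((exiting) ? "leaving state " : "entering state ") \
        << (fsm).getStateName() << endl)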
FSM_Switch() is a macro. It expands to a switch statement embedded in a for() loop which repeats until the FSM reaches a steady state.
Infinite loops are avoided by counting state transitions: if an FSM goes through 64 transitions without reaching a steady state, the simulation will terminate with an error message.
Let us write another bursty packet generator. It will have two states, SLEEP and ACTIVE. In the SLEEP state, the module does nothing. In the ACTIVE state, it sends messages with a given inter-arrival time. The code was taken from the Fifo2 sample simulation.
#define FSM_DEBUG
#include <omnetpp.h>

using namespace omnetpp;

class BurstyGenerator : public cSimpleModule
{
  protected:
    // parameters
    double sleepTimeMean;
    double burstTimeMean;
    double sendIATime;
    cPar *msgLength;

    // FSM and its states
    cFSM fsm;
    enum {
        INIT = 0,
        SLEEP = FSM_Steady(1),
        ACTIVE = FSM_Steady(2),
        SEND = FSM_Transient(1),
    };

    // variables used
    int i;
    cMessage *startStopBurst;
    cMessage *sendMessage;

    // the virtual functions
    virtual void initialize();
    virtual void handleMessage(cMessage *msg);
};

Define_Module(BurstyGenerator);

void BurstyGenerator::initialize()
{
    fsm.setName("fsm");

    sleepTimeMean = par("sleepTimeMean");
    burstTimeMean = par("burstTimeMean");
    sendIATime = par("sendIATime");
    msgLength = &par("msgLength");

    i = 0;
    WATCH(i);  // always put watches in initialize()

    startStopBurst = new cMessage("startStopBurst");
    sendMessage = new cMessage("sendMessage");

    scheduleAt(0.0, startStopBurst);
}

void BurstyGenerator::handleMessage(cMessage *msg)
{
    FSM_Switch(fsm) {
        case FSM_Exit(INIT):
            // transition to SLEEP state
            FSM_Goto(fsm, SLEEP);
            break;
        case FSM_Enter(SLEEP):
            // schedule end of sleep period (start of next burst)
            scheduleAt(simTime()+exponential(sleepTimeMean), startStopBurst);
            break;
        case FSM_Exit(SLEEP):
            // schedule end of this burst
            scheduleAt(simTime()+exponential(burstTimeMean), startStopBurst);
            // transition to ACTIVE state:
            if (msg != startStopBurst) {
                error("invalid event in state SLEEP");
            }
            FSM_Goto(fsm, ACTIVE);
            break;
        case FSM_Enter(ACTIVE):
            // schedule next sending
            scheduleAt(simTime()+exponential(sendIATime), sendMessage);
            break;
        case FSM_Exit(ACTIVE):
            // transition to either SEND or SLEEP
            if (msg == sendMessage) {
                FSM_Goto(fsm, SEND);
            }
            else if (msg == startStopBurst) {
                cancelEvent(sendMessage);
                FSM_Goto(fsm, SLEEP);
            }
            else {
                error("invalid event in state ACTIVE");
            }
            break;
        case FSM_Exit(SEND):
        {
            // generate and send out job
            char msgname[32];
            sprintf(msgname, "job-%d", ++i);
            EV << "Generating " << msgname << endl;
            cPacket *job = new cPacket(msgname);
            job->setBitLength((long) *msgLength);
            job->setTimestamp();
            send(job, "out");
            // return to ACTIVE
            FSM_Goto(fsm, ACTIVE);
            break;
        }
    }
}
If a module is part of a module vector, the getIndex() and getVectorSize() member functions can be used to query its index and the vector size:
EV << "This is module [" << module->getIndex() << "] in a vector of size [" << module->getVectorSize() << "].\n";
Every component (module and channel) in the network has an ID that can be obtained from cComponent's getId() member function:
int componentId = getId();
IDs uniquely identify a module or channel for the whole duration of the simulation. This holds even when modules are created and destroyed dynamically, because IDs of deleted modules or channels are never reused for newly created ones.
To look up a component by ID, one needs to use methods of the simulation manager object, cSimulation. getComponent() expects an ID, and returns the component's pointer if the component still exists, otherwise it returns nullptr. The method has two variations, getModule(id) and getChannel(id). They return cModule and cChannel pointers if the identified component is in fact a module or a channel, respectively, otherwise they return nullptr.
int id = 100; cModule *mod = getSimulation()->getModule(id); // exists, and is a module
The parent module can be accessed by the getParentModule() member function:
cModule *parent = getParentModule();
For example, the parameters of the parent module are accessed like this:
double timeout = getParentModule()->par("timeout");
cModule's findSubmodule() and getSubmodule() member functions make it possible to look up the module's submodules by name (or name and index if the submodule is in a module vector). The first one returns the module ID of the submodule, and the latter returns the module pointer. If the submodule is not found, they return -1 or nullptr, respectively.
int submodID = module->findSubmodule("foo", 3); // look up "foo[3]" cModule *submod = module->getSubmodule("foo", 3);
cModule's getModuleByPath() member function can be used to
find modules by relative or absolute path. It accepts a path string, and
returns the pointer of the matching module, or throws an exception if it
was not found. If it is not known in advance whether the module exists,
its companion function findModuleByPath() can be used.
findModuleByPath() returns nullptr if the module
identified by the path does not exist, but otherwise behaves identically
to getModuleByPath().
The path is a dot-separated list of module names. The special module name ^ (caret) stands for the parent module. If the path starts with a dot or caret, it is understood as relative to this module; otherwise it is taken to be an absolute path. For absolute paths, inclusion of the toplevel module's name in the path is optional. The toplevel module itself may be referred to as <root>.
The following lines demonstrate relative paths, and find the app[3] submodule and the gen submodule of the app[3] submodule of the module in question:
cModule *app = module->getModuleByPath(".app[3]"); // note leading dot cModule *gen = module->getModuleByPath(".app[3].gen");
Without the leading dot, the path is interpreted as absolute. The following lines both find the tcp submodule of host[2] in the network, regardless of the module on which getModuleByPath() is invoked.
cModule *tcp = module->getModuleByPath("Network.host[2].tcp"); cModule *tcp = module->getModuleByPath("host[2].tcp");
The parent module may be expressed with a caret:
cModule *parent = module->getModuleByPath("^");              // parent module
cModule *tcp = module->getModuleByPath("^.tcp");             // sibling module
cModule *other = module->getModuleByPath("^.^.host[1].tcp"); // two levels up, then...
To access all modules within a compound module, one can use cModule::SubmoduleIterator.
for (cModule::SubmoduleIterator it(module); !it.end(); it++) {
    cModule *submodule = *it;
    EV << submodule->getFullName() << endl;
}
To determine the module at the other end of a connection, use cGate's getPreviousGate(), getNextGate() and getOwnerModule() methods. An example:
cModule *neighbour = gate("out")->getNextGate()->getOwnerModule();
For input gates, use getPreviousGate() instead of getNextGate().
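For example, the neighbor connected to an input gate can be found like this:

cModule *inputNeighbour = gate("in")->getPreviousGate()->getOwnerModule();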
The endpoints of the connection path are returned by the getPathStartGate() and getPathEndGate() cGate methods. These methods follow the connection path by repeatedly calling getPreviousGate() and getNextGate(), respectively, until they arrive at a nullptr. An example:
cModule *peer = gate("out")->getPathEndGate()->getOwnerModule();
In some simulation models, there might be modules which are too tightly coupled for message-based communication to be efficient. In such cases, the solution might be calling one simple module's public C++ methods from another module.
Simple modules are C++ classes, so normal C++ method calls will work. Two issues need to be mentioned, however:
Typically, the called module is in the same compound module as the caller, so the getParentModule() and getSubmodule() methods of cModule can be used to get a cModule* pointer to the called module. (Further ways to obtain the pointer are described in the section [4.11].) The cModule* pointer then has to be cast to the actual C++ class of the module, so that its methods become visible.
This leads to code like the following:
cModule *targetModule = getParentModule()->getSubmodule("foo");
Foo *target = check_and_cast<Foo *>(targetModule);
target->doSomething();
The check_and_cast<>() template function on the second line is part of OMNeT++. It performs a standard C++ dynamic_cast, and checks the result: if it is nullptr, check_and_cast raises an OMNeT++ error. Using check_and_cast saves you from writing error checking code: if targetModule from the first line is nullptr because the submodule named "foo" was not found, or if that module is actually not of type Foo, an exception is thrown from check_and_cast with an appropriate error message.
The second issue is how to let the simulation kernel know that a method call across modules is taking place. Why is this necessary in the first place? First, the simulation kernel always has to know which module's code is currently executing, in order for ownership handling and other internal mechanisms to work correctly. Second, the Qtenv simulation GUI can animate method calls, but to be able to do that, it needs to know about them. Third, method calls are also recorded in the event log.
The solution is to add the Enter_Method() or Enter_Method_Silent() macro at the top of the methods that may be invoked from other modules. These calls perform context switching, and, in case of Enter_Method(), notify the simulation GUI so that animation of the method call can take place. Enter_Method_Silent() does not animate the method call, but otherwise it is equivalent to Enter_Method(). Both macros accept a printf()-like argument list (it is optional for Enter_Method_Silent()), which should produce a string with the method name and the actual arguments as much as practical. The string is displayed in the animation (Enter_Method() only) and recorded into the event log.
void Foo::doSomething()
{
    Enter_Method("doSomething()");
    ...
}
Certain simulation scenarios require the ability to dynamically create and destroy modules. For example, simulating the arrival and departure of new users in a mobile network may be implemented in terms of adding and removing modules during the course of the simulation. Loading and instantiating network topology (i.e. nodes and links) from a data file is another common technique enabled by dynamic module (and link) creation.
OMNeT++ allows both simple and compound modules to be created at runtime. When instantiating a compound module, its full internal structure (submodules and internal connections) is reproduced.
Once created and started, dynamic modules aren't any different from “static” modules.
To understand how dynamic module creation works, you have to know a bit about how OMNeT++ normally instantiates modules. Each module type (class) has a corresponding factory object of the class cModuleType. This object is created under the hood by the Define_Module() macro, and it has a factory method which can instantiate the module class (this function basically only consists of a return new <moduleclass>(...) statement).
The cModuleType object can be looked up by its name string (which is the same as the module class name). Once you have its pointer, it is possible to call its factory method and create an instance of the corresponding module class -- without having to include the C++ header file containing the module's class declaration into your source file.
The cModuleType object also knows what gates and parameters the given module type has to have. (This info comes from NED files.)
Simple modules can be created in one step. For a compound module, the situation is more complicated, because its internal structure (submodules, connections) may depend on parameter values and gate vector sizes. Thus, for compound modules it is generally required to first create the module itself, second, set parameter values and gate vector sizes, and then call the method that creates its submodules and internal connections.
As you know already, simple modules with activity() need a starter message. For statically created modules, this message is created automatically by OMNeT++, but for dynamically created modules, you have to do this explicitly by calling the appropriate functions.
Calling initialize() has to take place after insertion of the starter messages, because the initializing code may insert new messages into the FES, and these messages should be processed after the starter message.
The first step is to find the factory object. The cModuleType::get() function expects a fully qualified NED type name, and returns the factory object:
cModuleType *moduleType = cModuleType::get("foo.nodes.WirelessNode");
The return value does not need to be checked for nullptr, because the function raises an error if the requested NED type is not found. (If this behavior is not what you need, you can use the similar cModuleType::find() function, which returns nullptr if the type was not found.)
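A sketch of using cModuleType::find() when the NED type may be absent (for example, when it is provided by an optional project):

cModuleType *moduleType = cModuleType::find("foo.nodes.WirelessNode");
if (moduleType == nullptr)
    EV << "WirelessNode type not available, skipping node creation\n";
else
    moduleType->createScheduleInit("node", this);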
cModuleType has a createScheduleInit(const char *name, cModule *parentmod) convenience function to get a module up and running in one step.
cModule *mod = moduleType->createScheduleInit("node", this);
createScheduleInit() performs the following steps: create(), finalizeParameters(), buildInside(), scheduleStart(now) and callInitialize().
This method can be used for both simple and compound modules. Its applicability is somewhat limited, however: because it does everything in one step, you do not have the chance to set parameters or gate sizes, and to connect gates before initialize() is called. (initialize() expects all parameters and gates to be in place and the network fully built when it is called.) Because of the above limitation, this function is mainly useful for creating basic simple modules.
If the createScheduleInit() all-in-one method is not applicable, one needs to use the full procedure. It consists of five steps: finding the factory object; creating the module; setting up its parameters and gate vector sizes as needed; building its internals (submodules and connections); and scheduling activation message(s) for the new simple module(s).
Each step (except for Step 3.) can be done with one line of code.
See the following example, where Step 3 is omitted:
// find factory object
cModuleType *moduleType = cModuleType::get("foo.nodes.WirelessNode");

// create (possibly compound) module and build its submodules (if any)
cModule *module = moduleType->create("node", this);
module->finalizeParameters();
module->buildInside();

// create activation message
module->scheduleStart(simTime());
If you want to set up parameter values or gate vector sizes (Step 3.), the code goes between the create() and buildInside() calls:
// create
cModuleType *moduleType = cModuleType::get("foo.nodes.WirelessNode");
cModule *module = moduleType->create("node", this);

// set up parameters and gate sizes before we set up its submodules
module->par("address") = ++lastAddress;
module->finalizeParameters();
module->setGateSize("in", 3);
module->setGateSize("out", 3);

// create internals, and schedule it
module->buildInside();
module->scheduleStart(simTime());
To delete a module dynamically, use cModule's deleteModule() member function:
module->deleteModule();
If the module was a compound module, this involves recursively deleting all its submodules. An activity()-based simple module can also delete itself; in that case, the deleteModule() call does not return to the caller.
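For example, an activity()-based module might remove itself from the simulation like this (a sketch; note that any code placed after the call would never execute):

// inside activity(), when the module's work is done:
deleteModule();   // for a self-deleting module, this call does not return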
When deleteModule() is called on a compound module, individual modules under the compound module are notified by calling their preDelete() member functions before any change is actually made.
This notification can be quite useful when the compound module contains modules that hold pointers to each other, necessitated by their complex interactions via C++ method calls. With such modules, destruction can be tricky: given a sufficiently complex control flow involving cascading cross-module method calls and signal listeners, it is actually quite easy to accidentally invoke a method on a module that has already been deleted at that point, resulting in a crash. (Note that destructors of collaborating modules cannot rely on being invoked in any particular order, because that order is determined by factors, e.g. submodule order in NED, that are outside the control of the C++ code.)
preDelete() is a cComponent virtual method that, similar to handleMessage() and initialize(), is intended for being overridden by the user. When a compound module is deleted, deleteModule() first invokes preDelete() recursively on the submodule tree, and only starts deleting modules after that. This gives a chance to modules that override preDelete() to set pointers to collaborating modules to nullptr, or otherwise ensure that nothing bad will happen once modules start being deleted.
preDelete() receives as an argument the pointer of the module on which deleteModule() was invoked. This allows the module to tell apart cases when, for example, it is being deleted by itself or as part of a larger unit.
An example:
void Foo::preDelete(cComponent *root) { barModule = nullptr; }
opp_component_ptr<T> offers an answer to a related problem: how to detect when a module we have a pointer to is deleted, so that we no longer try to access it.
opp_component_ptr<T> is a smart pointer that points to a cComponent object (i.e. a module or a channel), and automatically becomes nullptr when the referenced object is deleted. It is a non-owning (“weak”) pointer, i.e. the pointer going out of scope has no effect on the referenced object.
In practice, one would replace bare pointers in the code (for example, Foo*) with opp_component_ptr<Foo> smart pointers, and test before accessing the other module that the pointer is still valid.
An example:
opp_component_ptr<Foo> fooModule;  // as class member

if (fooModule)
    fooModule->doSomething();

// or: obtain a bare pointer for multiple use
if (Foo *fooPtr = fooModule.get()) {
    fooPtr->doSomething();
    fooPtr->doSomethingElse();
}
finish() is called for all modules at the end of the simulation, no matter how the modules were created. If a module is dynamically deleted before that, finish() will not be invoked (deleteModule() does not do it). However, you can still manually invoke it before deleteModule().
You can use the callFinish() function to invoke finish() (it is not a good idea to invoke finish() directly). If you are deleting a compound module, callFinish() will recursively invoke finish() for all submodules, and if you are deleting a simple module from another module, callFinish() will do the context switch for the duration of the call.
Example:
mod->callFinish(); mod->deleteModule();
Connections can be created using cGate's connectTo() method. connectTo() should be invoked on the source gate of the connection, and expects the destination gate pointer as an argument. The use of the words source and destination corresponds to the direction of the arrow in NED files.
srcGate->connectTo(destGate);
connectTo() also accepts a channel object (cChannel*) as an additional, optional argument. Similarly to modules, channels can be created using their factory objects that have the type cChannelType:
cGate *outGate, *inGate;
...
// find factory object and create a channel
cChannelType *channelType = cChannelType::get("foo.util.Channel");
cChannel *channel = channelType->create("channel");

// create connection
outGate->connectTo(inGate, channel);
The channel object will be owned by the source gate of the connection, and one cannot reuse the same channel object with several connections.
Instantiating one of the built-in channel types (cIdealChannel, cDelayChannel or cDatarateChannel) is somewhat simpler, because those classes have static create() factory functions, and the step of finding the factory object can be spared. Alternatively, one can use cChannelType's createIdealChannel(), createDelayChannel() and createDatarateChannel() static methods.
The channel object may need to be parameterized before using it for a connection. For example, cDelayChannel has a setDelay() method, and cDatarateChannel has setDelay(), setDatarate(), setBitErrorRate() and setPacketErrorRate().
An example that sets up a channel with a datarate and a delay between two modules:
cDatarateChannel *datarateChannel = cDatarateChannel::create("channel");
datarateChannel->setDelay(0.001);
datarateChannel->setDatarate(1e9);
outGate->connectTo(inGate, datarateChannel);
Finally, here is a more complete example that creates two modules and connects them in both directions:
cModuleType *moduleType = cModuleType::get("TicToc");
cModule *a = moduleType->createScheduleInit("a", this);
cModule *b = moduleType->createScheduleInit("b", this);
a->gate("out")->connectTo(b->gate("in"));
b->gate("out")->connectTo(a->gate("in"));
The disconnect() method of cGate can be used to remove connections. This method has to be invoked on the source side of the connection. It also destroys the channel object associated with the connection, if one has been set.
srcGate->disconnect();
This section describes simulation signals, or signals for short. Signals are a versatile concept that first appeared in OMNeT++ 4.1.
Simulation signals can be used, among other things, for exposing statistics for result recording, for receiving notifications about changes in the model, and generally for publish-subscribe style communication between components.
Signals are emitted by components (modules and channels). Signals propagate on the module hierarchy up to the root. At any level, one can register listeners, that is, objects with callback methods. These listeners will be notified (their appropriate methods called) whenever a signal value is emitted. The result of upwards propagation is that listeners registered at a compound module can receive signals from all components in that submodule tree. A listener registered at the system module can receive signals from the whole simulation.
Signals are identified by signal names (i.e. strings), but for efficiency, at runtime we use dynamically assigned numeric identifiers (signal IDs, typedef'd as simsignal_t). The mapping of signal names to signal IDs is global, so all modules and channels asking to resolve a particular signal name will get back the same numeric signal ID.
Listeners can subscribe to signal names or IDs, regardless of their source. For example, if two different and unrelated module types, say Queue and Buffer, both emit a signal named "length", then a listener that subscribes to "length" at some higher compound module will get notifications from both Queue and Buffer module instances. The listener can still look at the source of the signal if it wants to distinguish the two (it is available as a parameter to the callback function), but the signals framework itself does not have such a feature.
When a signal is emitted, it can carry a value with it. There are multiple overloaded versions of the emit() method for different data types, and also overloaded receiveSignal() methods in listeners. The signal value can be of selected primitive types, or an object pointer; anything that is not feasible to emit as a primitive type may be wrapped into an object, and emitted as such.
Even when the signal value is of a primitive type, it is possible to convey extra information to listeners via an additional details object, which is an optional argument of emit().
The implementation of signals was designed to impose as little runtime and memory overhead as possible, especially for components that have no listeners at all.
These goals have been achieved in the 4.1 version with the following implementation. First, the data structure that stores listeners in components is dynamically allocated, so if there are no listeners, the per-component overhead is only the size of a pointer (which will be nullptr then).
Second, there are two bitfields in every component that store which of the first 64 signals (IDs 0..63) have local listeners, and which have listeners in ancestor modules.
Signal-related methods are declared on cComponent, so they are available for both cModule and cChannel.
Signals are identified by names, but internally numeric signal IDs are used for efficiency. The registerSignal() method takes a signal name as parameter, and returns the corresponding simsignal_t value. The method is static, illustrating the fact that signal names are global. An example:
simsignal_t lengthSignalId = registerSignal("length");
The getSignalName() method (also static) does the reverse: it accepts a simsignal_t, and returns the name of the signal as const char * (or nullptr for invalid signal handles):
const char *signalName = getSignalName(lengthSignalId); // --> "length"
The emit() family of functions emit a signal from the module or channel. emit() takes a signal ID (simsignal_t) and a value as parameters:
emit(lengthSignalId, queue.length());
The value can be of type bool, long, double, simtime_t, const char *, or (const) cObject *. Other types can be cast into one of these types, or wrapped into an object subclassed from cObject.
emit() also has an extra, optional object pointer argument named details, of type cObject*. This argument may be used to convey extra information to listeners.
When there are no listeners, the runtime cost of emit() is usually minimal. However, if producing a value has a significant runtime cost, then the mayHaveListeners() or hasListeners() method can be used to check beforehand whether the given signal has any listeners at all -- if not, producing the value and emitting the signal can be skipped.
Example usage:
if (mayHaveListeners(distanceToTargetSignal)) {
    double d = sqrt((x-targetX)*(x-targetX) + (y-targetY)*(y-targetY));
    emit(distanceToTargetSignal, d);
}
The mayHaveListeners() method is very efficient (a constant-time operation), but it may return false positives. In contrast, hasListeners() will search up to the top of the module tree if the answer is not cached, so it is generally slower. We recommend that you take into account the cost of producing the notification information when deciding between mayHaveListeners() and hasListeners().
Since OMNeT++ 4.4, signals can be declared in NED files for documentation purposes, and OMNeT++ can check that only declared signals are emitted, and that they actually conform to the declarations (with regard to the data type, etc.)
The following example declares a queue module that emits a signal named queueLength:
simple Queue {
    parameters:
        @signal[queueLength](type=long);
    ...
}
Signals are declared with the @signal property on the module or channel that emits it. (NED properties are described in [3.12]). The property index corresponds to the signal name, and the property's body may declare various attributes of the signal; currently only the data type is supported.
The type property key is optional; when present, its value should be bool, long, unsigned long, double, simtime_t, string, or a registered class name optionally followed by a question mark. Classes can be registered using the Register_Class() or Register_Abstract_Class() macros; these macros create a cObjectFactory instance, and the simulation kernel calls cObjectFactory's isInstance() method to check that the emitted object is really a subclass of the declared class (isInstance() just wraps a C++ dynamic_cast).
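For example, assuming PPPFrame is a class derived from cPacket and defined elsewhere in the model, it could be registered like this:

Register_Class(PPPFrame);   // makes PPPFrame known to the simulation kernel's object factory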
A question mark after the class name means that the signal is allowed to carry nullptr pointers. For example, a module named PPP may emit the frame (packet) object every time it starts transmitting, and emit nullptr when the transmission is completed:
simple PPP {
    parameters:
        @signal[txFrame](type=PPPFrame?);  // a PPPFrame or nullptr
    ...
}
The property index may contain wildcards, which is important for declaring signals whose names are only known at runtime. For example, if a module emits signals called session-1-seqno, session-2-seqno, session-3-seqno, etc., those signals can be declared as:
@signal[session-*-seqno]();
Starting with OMNeT++ 5.0, signal checking is turned on by default when the simulation kernel is compiled in debug mode, requiring all signals to be declared with @signal. (It is turned off in release mode simulation kernels due to performance reasons.)
If needed, signal checking can be disabled with the check-signals configuration option:
check-signals = false
When emitting a signal with a cObject* pointer, you can pass as data an object that you already have in the model, provided you have a suitable object at hand. However, it is often necessary to declare a custom class to hold all the details, and fill in an instance just for the purpose of emitting the signal.
The custom notification class must be derived from cObject. We recommend that you also add noncopyable as a base class, because then you don't need to write a copy constructor, assignment operator, and dup() function, sparing some work. When emitting the signal, you can create a temporary object, and pass its pointer to the emit() function.
Examples of custom notification classes are the ones associated with model change notifications (see [4.14.3]). For example, the data class that accompanies a signal announcing that a gate or gate vector is about to be created looks like this:
class cPreGateAddNotification : public cObject, noncopyable
{
  public:
    cModule *module;
    const char *gateName;
    cGate::Type gateType;
    bool isVector;
};
And the code that emits the signal:
if (hasListeners(PRE_MODEL_CHANGE)) {
    cPreGateAddNotification tmp;
    tmp.module = this;
    tmp.gateName = gatename;
    tmp.gateType = type;
    tmp.isVector = isVector;
    emit(PRE_MODEL_CHANGE, &tmp);
}
The subscribe() method registers a listener for a signal. Listeners are objects that extend the cIListener class. The same listener object can be subscribed to multiple signals. subscribe() has two arguments: the signal and a pointer to the listener object:
cIListener *listener = ...;
simsignal_t lengthSignalId = registerSignal("length");
subscribe(lengthSignalId, listener);
For convenience, the subscribe() method has a variant that takes the signal name directly, so the registerSignal() call can be omitted:
cIListener *listener = ...; subscribe("length", listener);
One can also subscribe at other modules, not only the local one. For example, in order to get signals from all parts of the model, one can subscribe at the system module level:
cIListener *listener = ...; getSimulation()->getSystemModule()->subscribe("length", listener);
The unsubscribe() method has the same parameter list as subscribe(), and unregisters the given listener from the signal:
unsubscribe(lengthSignalId, listener);
or
unsubscribe("length", listener);
It is an error to subscribe the same listener to the same signal twice.
It is possible to test whether a listener is subscribed to a signal, using the isSubscribed() method which also takes the same parameter list.
if (isSubscribed(lengthSignalId, listener)) { ... }
For completeness, there are methods for getting the list of signals that the component has subscribed to (getLocalListenedSignals()), and the list of listeners for a given signal (getLocalSignalListeners()). The former returns std::vector<simsignal_t>; the latter takes a signal ID (simsignal_t) and returns std::vector<cIListener*>.
The following example prints the number of listeners for each signal:
EV << "Signal listeners:\n";
std::vector<simsignal_t> signals = getLocalListenedSignals();
for (unsigned int i = 0; i < signals.size(); i++) {
    simsignal_t signalID = signals[i];
    std::vector<cIListener*> listeners = getLocalSignalListeners(signalID);
    EV << getSignalName(signalID) << ": " << listeners.size() << " listeners\n";
}
Listeners are objects that subclass from the cIListener class, which declares the following methods:
class cIListener
{
  public:
    virtual ~cIListener() {}
    virtual void receiveSignal(cComponent *src, simsignal_t id, bool value, cObject *details) = 0;
    virtual void receiveSignal(cComponent *src, simsignal_t id, intval_t value, cObject *details) = 0;
    virtual void receiveSignal(cComponent *src, simsignal_t id, uintval_t value, cObject *details) = 0;
    virtual void receiveSignal(cComponent *src, simsignal_t id, double value, cObject *details) = 0;
    virtual void receiveSignal(cComponent *src, simsignal_t id, simtime_t value, cObject *details) = 0;
    virtual void receiveSignal(cComponent *src, simsignal_t id, const char *value, cObject *details) = 0;
    virtual void receiveSignal(cComponent *src, simsignal_t id, cObject *value, cObject *details) = 0;
    virtual void finish(cComponent *component, simsignal_t id) {}
    virtual void subscribedTo(cComponent *component, simsignal_t id) {}
    virtual void unsubscribedFrom(cComponent *component, simsignal_t id) {}
};
This class has a number of virtual methods: the receiveSignal() overloads are the callbacks invoked when a signal of the corresponding data type is emitted; finish() lets result-recording listeners wrap up their results at the end of the simulation; and subscribedTo() / unsubscribedFrom() notify the listener that it has been subscribed to, or unsubscribed from, a signal at the given component.
Since cIListener has a large number of pure virtual methods, it is more convenient to subclass from cListener, a do-nothing implementation, instead. It defines finish(), subscribedTo() and unsubscribedFrom() with an empty body, and the receiveSignal() methods with bodies that throw a "Data type not supported" error. You can redefine the receiveSignal() method(s) whose data type you want to support; signals emitted with other (unexpected) data types will then result in an error instead of going unnoticed.
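For example, a minimal listener that only handles integer-valued signals might look like this sketch (the class name and the log output are illustrative):

class QueueLengthListener : public cListener
{
  public:
    // called for signals emitted with an integer value; other data types
    // fall through to cListener's error-throwing defaults
    virtual void receiveSignal(cComponent *src, simsignal_t id, intval_t value, cObject *details) override
    {
        EV << src->getFullPath() << " emitted "
           << cComponent::getSignalName(id) << " = " << value << "\n";
    }
};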
The order in which listeners will be notified is undefined (it is not necessarily the same order in which listeners were subscribed.)
When a component (module or channel) is deleted, it automatically unsubscribes (but does not delete) the listeners it has. When a module is deleted, it first unsubscribes all listeners from all modules and channels in its submodule tree before starting to recursively delete the modules and channels themselves.
When a listener is deleted, it automatically unsubscribes from all components it is subscribed to.
In simulation models it is often useful to hold references to other modules, a connecting channel or other objects, or to cache information derived from the model topology. However, such pointers or data may become invalid when the model changes at runtime, and need to be updated or recalculated. The problem is how to get notification that something has changed in the model.
The solution is, of course, signals. OMNeT++ has two built-in signals, PRE_MODEL_CHANGE and POST_MODEL_CHANGE (these macros are simsignal_t values, not names) that are emitted before and after each model change.
Pre/post model change notifications are emitted with data objects that carry the details of the change; each kind of change has its own data class (for example, cPreGateAddNotification shown above, or cPreModuleDeleteNotification used below).
They all subclass from cModelChangeNotification, which is of course a cObject. Inside the listener, you can use dynamic_cast<> to figure out what notification arrived.
An example listener that prints a message when a module is deleted:
class MyListener : public cListener
{
    ...
};

void MyListener::receiveSignal(cComponent *src, simsignal_t id, cObject *value, cObject *details)
{
    if (dynamic_cast<cPreModuleDeleteNotification *>(value)) {
        cPreModuleDeleteNotification *data = (cPreModuleDeleteNotification *)value;
        EV << "Module " << data->module->getFullPath() << " is about to be deleted\n";
    }
}
If you'd like to get notification about the deletion of any module, you need to install the listener on the system module:
getSimulation()->getSystemModule()->subscribe(PRE_MODEL_CHANGE, listener);
One use of signals is to expose variables for result collection without telling where, how, and whether to record them. With this approach, modules only publish the variables, and the actual result recording takes place in listeners. Listeners may be added by the simulation framework (based on the configuration), or by other modules (for example by dedicated result collection modules).
The signals approach makes this decoupling possible: what gets recorded, and in what form, can be decided by listeners and the configuration, without modifying or even recompiling the modules that emit the values.
In order to record simulation results based on signals, one must add @statistic properties to the simple module's (or channel's) NED definition. A @statistic property defines the name of the statistic, which signal(s) are used as input, what processing steps are to be applied to them (e.g. smoothing, filtering, summing, differential quotient), and what properties are to be recorded (minimum, maximum, average, etc.) and in which form (vector, scalar, histogram). Record items can be marked optional, which lets you denote a “default” and a more comprehensive “all” result set to be recorded; the list of record items can be further tweaked from the configuration. One can also specify a descriptive name (“title”) for the statistic, and also a measurement unit.
The following example declares a queue module with a queue length statistic:
simple Queue
{
    parameters:
        @statistic[queueLength](record=max,timeavg,vector?);
    gates:
        input in;
        output out;
}
As you can see, statistics are represented with indexed NED properties (see [3.12]). The property name is always statistic, and the index (here, queueLength) is the name of the statistic. The property value, that is, everything inside the parentheses, carries hints and extra information for recording.
The above @statistic declaration assumes that the module's C++ code emits the queue's updated length as signal queueLength whenever elements are inserted into the queue or removed from it. By default, the maximum and the time average of the queue length will be recorded as scalars. One can also instruct the simulation (or parts of it) to record "all" results; this turns on the optional record items, those marked with a question mark, and then the queue length will also be recorded into an output vector.
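For illustration, the optional record items can be switched on from the configuration with the per-statistic result-recording-modes option; the module path in the following sketch is hypothetical:

# omnetpp.ini
**.queue.queueLength.result-recording-modes = all   # also record the optional "vector?" item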
In the above example, the signal to be recorded was taken from the statistic name. When that is not suitable, the source property key lets you specify a different signal as input for the statistic. The following example assumes that the C++ code emits a qlen signal, and declares a queueLength statistic based on that:
simple Queue
{
    parameters:
        @signal[qlen](type=int); // optional
        @statistic[queueLength](source=qlen; record=max,timeavg,vector?);
    ...
}
Note that in addition to the source=qlen property key, we have also added a signal declaration (a @signal property) for the qlen signal. Declaring signals is optional, and in fact @signal properties are currently ignored by the system, but doing so is good practice nevertheless.
It is also possible to apply processing to a signal before recording it. Consider the following example:
@statistic[dropCount](source=count(drop); record=last,vector?);
This records the total number of packet drops as a scalar, and optionally the number of packets dropped as a function of time as a vector, provided the C++ code emits a drop signal every time a packet is dropped. The value and even the data type of the drop signal do not matter, because only the number of emits is counted. Here, count() is a result filter.
Another example:
@statistic[droppedBytes](source=sum(packetBytes(pkdrop)); record=last, vector?);
This example assumes that the C++ code emits a pkdrop signal with a packet (cPacket* pointer) as a value. Based on that signal, it records the total number of bytes dropped (as a scalar, and optionally as a vector too). The packetBytes() filter extracts the number of bytes from each packet using cPacket's getByteLength() method, and the sum() filter, well, sums them up.
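For completeness, the emitting side assumed by these examples might look like the following sketch (the Queue class and its dropPacket() helper are illustrative):

class Queue : public cSimpleModule
{
  protected:
    simsignal_t pkdropSignal;

    virtual void initialize() override {
        pkdropSignal = registerSignal("pkdrop");
    }

    void dropPacket(cPacket *pkt) {
        emit(pkdropSignal, pkt);  // filters such as packetBytes() receive the cPacket* value
        delete pkt;
    }
};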
Arithmetic expressions can also be used. For example, the following line computes the number of dropped bytes using the packetBits() filter.
@statistic[droppedBytes](source=sum(packetBits(pkdrop)/8); record=last, vector?);
The source can also combine multiple signals in an arithmetic expression:
@statistic[dropRate](source=count(drop)/count(pk); record=last,vector?);
When multiple signals are used, a value arriving on either signal will result in one output value. The computation will use the last values of the other signals (sample-hold interpolation). One limitation regarding multiple signals is that the same signal cannot occur twice, because it would cause glitches in the output.
Record items may also be expressions and contain filters. For example, the statistic below is functionally equivalent to one of the above examples: it also computes and records as scalar and as vector the total number of bytes dropped, using a cPacket*-valued signal as input; however, some of the computations have been shifted into the recorder part.
@statistic[droppedBytes](source=packetBits(pkdrop); record=last(sum/8), vector(sum/8)?);
The following keys are understood in @statistic properties:
The following table contains the list of predefined result filters. Most filters output one value for each input value; exceptions (such as removeRepeats, which may swallow values) are noted in their descriptions.
Filter | Description |
count | Computes and outputs the count of values received so far. |
sum | Computes and outputs the sum of values received so far. |
min | Computes and outputs the minimum of values received so far. |
max | Computes and outputs the maximum of values received so far. |
mean | Computes and outputs the average (sum / count) of values received so far. |
timeavg | Regards the input values and their timestamps as a step function (sample-hold style), and computes and outputs its time average (integral divided by duration). |
constant0 | Outputs a constant 0 for each received value (independent of the value). |
constant1 | Outputs a constant 1 for each received value (independent of the value). |
packetBits | Expects cPacket pointers as value, and outputs the bit length for each received one. Non-cPacket values are ignored. |
packetBytes | Expects cPacket pointers as value, and outputs the byte length for each received one. Non-cPacket values are ignored. |
sumPerDuration | For each value, computes the sum of values received so far, divides it by the duration, and outputs the result. |
removeRepeats | Removes repeated values, i.e. discards values that are the same as the previous value. |
The list of predefined result recorders:
Recorder | Description |
last | Records the last value into an output scalar. |
count | Records the count of the input values into an output scalar; functionally equivalent to last(count) |
sum | Records the sum of the input values into an output scalar (or zero if there was none); functionally equivalent to last(sum) |
min | Records the minimum of the input values into an output scalar (or positive infinity if there was none); functionally equivalent to last(min) |
max | Records the maximum of the input values into an output scalar (or negative infinity if there was none); functionally equivalent to last(max) |
mean | Records the mean of the input values into an output scalar (or NaN if there was none); functionally equivalent to last(mean) |
timeavg | Regards the input values with their timestamps as a step function (sample-hold style), and records the time average of the input values into an output scalar; functionally equivalent to last(timeavg) |
stats | Computes basic statistics (count, mean, std.dev, min, max) from the input values, and records them into the output scalar file as a statistic object. |
histogram | Computes a histogram and basic statistics (count, mean, std.dev, min, max) from the input values, and records the result into the output scalar file as a histogram object. |
vector | Records the input values with their timestamps into an output vector. |
The names of recorded result items will be formed by concatenating the statistic name and the recording mode with a colon between them: "<statisticName>:<recordingMode>".
Thus, the following statistics
@statistic[dropRate](source=count(drop)/count(pk); record=last,vector?);
@statistic[droppedBytes](source=packetBytes(pkdrop); record=sum,vector(sum)?);
will produce the following scalars: dropRate:last, droppedBytes:sum, and the following vectors: dropRate:vector, droppedBytes:vector(sum).
All property keys (except for record) are recorded as result attributes into the vector file or scalar file. The title property will be tweaked a little before recording: the recording mode will be added after a comma, otherwise all result items saved from the same statistic would have exactly the same name.
Example: "Dropped Bytes, sum", "Dropped Bytes, vector(sum)"
It is allowed to use other property keys as well, but they won't be interpreted by the OMNeT++ runtime or the result analysis tool.
To fully understand source and record, it will be useful to see how result recording is set up.
When a module or channel is created in the simulation, the OMNeT++ runtime examines the @statistic properties on its NED declaration, and adds listeners on the signals they mention as input. There are two kinds of listeners associated with result recording: result filters and result recorders. Result filters can be chained, and at the end of the chain there is always a recorder. So, there may be a recorder directly subscribed to a signal, or there may be a chain of one or more filters plus a recorder. Imagine it as a pipeline, or rather a “pipe tree”, where the tree roots are signals, the leaves are result recorders, and the intermediate nodes are result filters.
Result filters typically perform some processing on the values they receive on their inputs (the previous filter in the chain or directly a signal), and propagate them to their output (chained filters and recorders). A filter may also swallow (i.e. not propagate) values. Recorders may write the received values into an output vector, or record output scalar(s) at the end of the simulation.
Many operations exist both in filter and recorder form. For example, the sum filter propagates the sum of the values received on its input to its output, while the sum recorder only computes the sum of the received values in order to record it as an output scalar on simulation completion.
The next figure illustrates which filters and recorders are created and how they are connected for the following statistics:
@statistic[droppedBits](source=8*packetBytes(pkdrop); record=sum,vector(sum));
It is often convenient to have a module record statistics per session, per connection, per client, etc. One way of handling this use case is registering signals dynamically (e.g. session1-jitter, session2-jitter, ...), and setting up @statistic-style result recording on each.
The NED file would look like this:
@signal[session*-jitter](type=simtime_t); // note the wildcard
@statisticTemplate[sessionJitter](record=mean,vector?);
In the C++ code of the module, you need to register each new signal with registerSignal(), and in addition, tell OMNeT++ to set up statistics recording for it as described by the @statisticTemplate property. The latter can be achieved by calling getEnvir()->addResultRecorders().
char signalName[32];
sprintf(signalName, "session%d-jitter", sessionNum);
simsignal_t signal = registerSignal(signalName);

char statisticName[32];
sprintf(statisticName, "session%d-jitter", sessionNum);
cProperty *statisticTemplate =
    getProperties()->get("statisticTemplate", "sessionJitter");
getEnvir()->addResultRecorders(this, signal, statisticName, statisticTemplate);
In the @statisticTemplate property, the source key will be ignored (because the signal given as parameter will be used as the source). The actual name and index of the property will also be ignored. (With @statistic, the index holds the result name, but here the name is explicitly specified in the statisticName parameter.)
When multiple signals are recorded using a common @statisticTemplate property, you'll want the titles of the recorded statistics to differ for each signal. This can be achieved by using dollar variables in the title key of @statisticTemplate. The following variables are available:
For example, if the statistic name is "conn:host1-to-host4(3):bytesSent", and the title is "bytes sent in connection $namePart2", it will become "bytes sent in connection host1-to-host4(3)".
As an alternative to @statisticTemplate and addResultRecorders(), it is also possible to set up result recording programmatically, by creating and attaching result filters and recorders to the desired signals.
The following code example sets up recording to an output vector after removing duplicate values, and is essentially equivalent to the following @statistic line:
@statistic[queueLength](source=qlen; record=vector(removeRepeats); title="Queue Length"; unit=packets);
The C++ code:
simsignal_t signal = registerSignal("qlen"); cResultFilter *warmupFilter = cResultFilterType::get("warmup")->create(); cResultFilter *removeRepeatsFilter = cResultFilterType::get("removeRepeats")->create(); cResultRecorder *vectorRecorder = cResultRecorderType::get("vector")->create(); opp_string_map *attrs = new opp_string_map; (*attrs)["title"] = "Queue Length"; (*attrs)["unit"] = "packets"; cResultRecorder::Context ctx { this, "queueLength", "vector", nullptr, attrs}; vectorRecorder->init(&ctx); subscribe(signal, warmupFilter); warmupFilter->addDelegate(removeRepeatsFilter); removeRepeatsFilter->addDelegate(vectorRecorder);
Emitting signals for statistical purposes does not differ much from emitting signals for any other purpose. Statistic signals are primarily expected to contain numeric values, so the overloaded emit() functions that take long, double and simtime_t are going to be the most useful ones.
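A couple of typical emit calls of this kind, as a sketch (qlenSignal and delaySignal are assumed to have been registered with registerSignal(), and queue is assumed to be a cQueue member):

emit(qlenSignal, (intval_t)queue.getLength());           // integer-valued statistic
emit(delaySignal, simTime() - msg->getCreationTime());   // simtime_t-valued statistic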
Emitting with timestamp. The emitted values are associated with the current simulation time. At times it might be desirable to associate them with a different timestamp, in much the same way as the recordWithTimestamp() method of cOutVector (see [7.10.1]) does. For example, assume that you want to emit a signal at the start of every successful wireless frame reception. However, whether any given frame reception is going to be successful can only be known after the reception has completed. Hence, values can only be emitted at reception completion, and need to be associated with past timestamps.
To emit a value with a different timestamp, an object containing a (timestamp, value) pair needs to be filled in, and emitted using the emit(simsignal_t, cObject *) method. The class is called cTimestampedValue, and it simply has two public data members called time and value, with types simtime_t and double. It also has a convenience constructor taking these two values.
An example usage:
simtime_t frameReceptionStartTime = ...;
double receivePower = ...;
cTimestampedValue tmp(frameReceptionStartTime, receivePower);
emit(recvPowerSignal, &tmp);
If performance is critical, the cTimestampedValue object may be made a class member or a static variable to eliminate object construction/destruction time.
Timestamps must be monotonically increasing.
Emitting non-numeric values. Sometimes it is practical to have multi-purpose signals, or to retrofit an existing non-statistical signal so that it can be recorded as a result. For this reason, signals having non-numeric types (that is, const char * and cObject *) may also be recorded as results. Wherever such values need to be interpreted as numbers, the following rules are used by the built-in result recording listeners:
cITimestampedValue is a C++ interface that may be used as an additional base class for any class. It is declared like this:
class cITimestampedValue
{
  public:
    virtual ~cITimestampedValue() {}
    virtual double getSignalValue(simsignal_t signalID) = 0;
    virtual simtime_t getSignalTime(simsignal_t signalID);
};
getSignalValue() is pure virtual (it must return some value), but getSignalTime() has a default implementation that returns the current simulation time. Note the signalID argument that allows the same class to serve multiple signals (i.e. to return different values for each).
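As a sketch of how this interface might be used (the ReceptionReport class and its fields are illustrative), a value class can implement cITimestampedValue so that emitting it as a cObject* yields both a custom value and a custom timestamp:

class ReceptionReport : public cObject, public cITimestampedValue
{
  public:
    simtime_t receptionStart;
    double receivePower;

    virtual double getSignalValue(simsignal_t signalID) override { return receivePower; }
    virtual simtime_t getSignalTime(simsignal_t signalID) override { return receptionStart; }
};

// usage: fill in the fields, then emit(recvPowerSignal, &report);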
You can define your own result filters and recorders in addition to the built-in ones. Similar to defining modules and new NED functions, you have to write the implementation in C++, and then register it with a registration macro to let OMNeT++ know about it. The new result filter or recorder can then be used in the source= and record= attributes of @statistic properties just like the built-in ones.
Result filters must be subclassed from cResultFilter or from one of its more specific subclasses cNumericResultFilter and cObjectResultFilter. The new result filter class needs to be registered using the Register_ResultFilter(NAME, CLASSNAME) macro.
Similarly, a result recorder must subclass from cResultRecorder or from the more specific cNumericResultRecorder class, and be registered using the Register_ResultRecorder(NAME, CLASSNAME) macro.
An example result filter implementation from the simulation runtime:
/**
 * Filter that outputs the sum of signal values divided by the measurement
 * interval (simtime minus warmup period).
 */
class SumPerDurationFilter : public cNumericResultFilter
{
  protected:
    double sum;
  protected:
    virtual bool process(simtime_t& t, double& value, cObject *details);
  public:
    SumPerDurationFilter() {sum = 0;}
};

Register_ResultFilter("sumPerDuration", SumPerDurationFilter);

bool SumPerDurationFilter::process(simtime_t& t, double& value, cObject *)
{
    sum += value;
    value = sum / (simTime() - getSimulation()->getWarmupPeriod());
    return true;
}
Messages are a central concept in OMNeT++. In the model, message objects represent events, packets, commands, jobs, customers or other kinds of entities, depending on the model domain.
Messages are represented with the cMessage class and its subclass cPacket. cPacket is used for network packets (frames, datagrams, transport packets, etc.) in a communication network, and cMessage is used for everything else. Users are free to subclass both cMessage and cPacket to create new types and to add data.
cMessage has the following fields; some are used by the simulation kernel, and others are provided for the convenience of the simulation programmer:
The cPacket class extends cMessage with fields that are useful for representing network packets:
The cMessage constructor accepts an object name and a message kind, both optional:
cMessage(const char *name=nullptr, short kind=0);
Descriptive message names can be very useful when tracing, debugging or demonstrating the simulation, so it is recommended to use them. Message kind is usually initialized with a symbolic constant (e.g. an enum value) which signals what the message object represents. Only positive values and zero can be used -- negative values are reserved for use by the simulation kernel.
The following lines show some examples of message creation:
cMessage *msg1 = new cMessage();
cMessage *msg2 = new cMessage("timeout");
cMessage *msg3 = new cMessage("timeout", KIND_TIMEOUT);
Once a message has been created, its basic data members can be set with the following methods:
void setName(const char *name);
void setKind(short k);
void setTimestamp();
void setTimestamp(simtime_t t);
void setSchedulingPriority(short p);
The argument-less setTimestamp() method is equivalent to setTimestamp(simTime()).
The corresponding getter methods are:
const char *getName() const;
short getKind() const;
simtime_t getTimestamp() const;
short getSchedulingPriority() const;
The getName()/setName() methods are inherited from a generic base class in the simulation library, cNamedObject.
Two more interesting methods:
bool isPacket() const;
simtime_t getCreationTime() const;
The isPacket() method returns true if the particular message object is a subclass of cPacket, and false otherwise. As isPacket() is implemented as a virtual function that just contains a return false or a return true statement, it might be faster than calling dynamic_cast<cPacket*>.
The getCreationTime() method returns the creation time of the message. It is worth mentioning that for cloned messages (see dup() later), the creation time of the original message is returned, not the time of the cloning operation. This is particularly useful when modeling communication protocols, because many protocols clone the transmitted packets to be able to do retransmissions and/or segmentation/reassembly.
It is often necessary to duplicate a message or a packet, for example, to send one and keep a copy. Duplication can be done in the same way as for any other OMNeT++ object:
cMessage *copy = msg->dup();
The resulting message (or packet) will be an exact copy of the original, including message parameters and encapsulated messages, except for the message ID field. The creation time field is also copied, so for cloned messages getCreationTime() will return the creation time of the original, not the time of the cloning operation.
When subclassing cMessage or cPacket, one needs to reimplement dup(). The recommended implementation is to delegate to the copy constructor of the new class:
class FooMessage : public cMessage
{
  public:
    FooMessage(const FooMessage& other) {...}
    virtual FooMessage *dup() const {return new FooMessage(*this);}
    ...
};
For generated classes (chapter [6]), this is taken care of automatically.
Every message object has a unique numeric message ID. It is normally used for identifying the message in a recorded event log file, but may occasionally be useful for other purposes as well. When a message is cloned (msg->dup()), the clone will have a different ID.
There is also another ID called the tree ID. The tree ID is initialized to the message ID. However, when a message is cloned, the clone will retain the tree ID of the original. Thus, messages that have been created by cloning the same message or its clones will have the same tree ID. Message IDs are of the type long, which is usually enough so that IDs remain unique during the simulation run (i.e. the counter does not wrap).
The methods for obtaining message IDs:
long getId() const;
long getTreeId() const;
One of the main application areas of OMNeT++ is the simulation of telecommunication networks. Here, protocol layers are usually implemented as modules which exchange packets. Packets themselves are represented by messages subclassed from cPacket.
However, communication between protocol layers requires sending additional information to be attached to packets. For example, a TCP implementation sending down a TCP packet to IP will want to specify the destination IP address and possibly other parameters. When IP passes up a packet to TCP after decapsulation from the IP header, it will want to let TCP know at least the source IP address.
This additional information is represented by control info objects in OMNeT++. Control info objects have to be subclassed from cObject (a small footprint base class with no data members), and can be attached to any message. cMessage has the following methods for this purpose:
void setControlInfo(cObject *controlInfo);
cObject *getControlInfo() const;
cObject *removeControlInfo();
When a "command" is associated with the message sending (such as TCP OPEN, SEND, CLOSE, etc), the message kind field (getKind(), setKind() methods of cMessage) should carry the command code. When the command doesn't involve a data packet (e.g. TCP CLOSE command), a dummy packet (empty cMessage) can be sent.
An object set as control info via setControlInfo() will be owned by the message object. When the message is deallocated, the control info object is deleted as well.
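A brief sketch of how control info is typically attached and retrieved (the IPControlInfo class, its setDestAddress() method, and the gate names are hypothetical):

// sender side (e.g. in TCP): attach routing information to the packet
IPControlInfo *ctrl = new IPControlInfo();
ctrl->setDestAddress(destAddr);
packet->setControlInfo(ctrl);   // the packet takes ownership of ctrl
send(packet, "toNetwork");

// receiver side (e.g. in IP): detach and use the control info
IPControlInfo *info = check_and_cast<IPControlInfo *>(packet->removeControlInfo());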
The following methods return the sending and arrival times that correspond to the last sending of the message.
simtime_t getSendingTime() const;
simtime_t getArrivalTime() const;
The following methods can be used to determine where the message came from and which gate it arrived on (or will arrive if it is currently scheduled or under way.) There are two sets of methods, one returning module/gate Ids, and the other returning pointers.
int getSenderModuleId() const;
int getSenderGateId() const;
int getArrivalModuleId() const;
int getArrivalGateId() const;

cModule *getSenderModule() const;
cGate *getSenderGate() const;
cModule *getArrivalModule() const;
cGate *getArrivalGate() const;
There are further convenience functions to tell whether the message arrived on a specific gate given with id or with name and index.
bool arrivedOn(int gateId) const;
bool arrivedOn(const char *gatename) const;
bool arrivedOn(const char *gatename, int gateindex) const;
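For example, a handleMessage() implementation might dispatch on the arrival gate like in the following sketch (the gate names and helper functions are illustrative):

if (msg->arrivedOn("fromApp"))
    handleAppMessage(msg);        // came from the upper layer
else if (msg->arrivedOn("in", 0))
    handleNetworkMessage(msg);    // came in on gate in[0]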
Display strings affect the message's visualization in graphical user interfaces like Qtenv. Message objects do not store a display string by default, but contain a getDisplayString() method that can be overridden in subclasses to return the desired string. The method:
const char *getDisplayString() const;
Since OMNeT++ version 5.1, cPacket's default getDisplayString() implementation is such that a packet “inherits” the display string of its encapsulated packet, provided it has one. Thus, in the model of a network stack, the appearance of e.g. an application layer packet will be preserved even after multiple levels of encapsulation.
See section for more information on message display string syntax and possibilities.
Messages are often used to represent events internal to a module, such as a periodically firing timer to represent expiry of a timeout. A message is termed self-message when it is used in such a scenario -- otherwise self-messages are normal messages of class cMessage or a class derived from it.
When a message is delivered to a module by the simulation kernel, the isSelfMessage() method can be used to determine if it is a self-message; that is, whether it was scheduled with scheduleAt(), or sent with one of the send...() methods. The isScheduled() method returns true if the message is currently scheduled. A scheduled message can also be cancelled (cancelEvent()).
bool isSelfMessage() const;
bool isScheduled() const;
The methods getSendingTime() and getArrivalTime() are also useful with self-messages: they return the time the message was scheduled and arrived (or will arrive; while the message is scheduled, arrival time is the time it will be delivered to the module).
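The typical timer pattern, as a sketch (the timeout variable and the helper functions are illustrative):

// arming the timer, e.g. in initialize():
cMessage *timeoutEvent = new cMessage("timeout");
scheduleAt(simTime() + timeout, timeoutEvent);

// in handleMessage():
if (msg->isSelfMessage())
    handleTimer(msg);   // the timer fired
else
    processPacket(check_and_cast<cPacket *>(msg));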
cMessage contains a context pointer of type void*, which can be accessed by the following functions:
void setContextPointer(void *p);
void *getContextPointer() const;
The context pointer can be used for any purpose by the simulation programmer. It is not used by the simulation kernel, and it is treated as a mere pointer (no memory management is done on it).
Intended purpose: a module that schedules several self-messages (timers) will need to identify each self-message when it arrives back at the module, i.e. determine which timer went off and what to do then. The context pointer can be made to point at a data structure kept by the module, which can carry enough “context” information about the event.
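For example, a per-connection retransmission timer might be set up as in the sketch below (the Connection structure and the rto variable are illustrative):

// arming the timer for a particular connection:
cMessage *timer = new cMessage("retransmissionTimer");
timer->setContextPointer(conn);        // conn is a Connection*
scheduleAt(simTime() + rto, timer);

// when the timer fires, in handleMessage():
Connection *conn = (Connection *)msg->getContextPointer();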
The cPacket constructor is similar to the cMessage constructor, but it accepts an additional bit length argument:
cPacket(const char *name=nullptr, short kind=0, int64_t bitLength=0);
The most important field cPacket has over cMessage is the message length. This field is kept in bits, but it can also be set/get in bytes. If the bit length is not a multiple of eight, the getByteLength() method will round it up.
void setBitLength(int64_t l);
void setByteLength(int64_t l);
void addBitLength(int64_t delta);
void addByteLength(int64_t delta);
int64_t getBitLength() const;
int64_t getByteLength() const;
Another extra field is the bit error flag. It can be accessed with the following methods:
void setBitError(bool e);
bool hasBitError() const;
In OMNeT++ protocol models, the protocol type is usually represented in the message subclass. For example, instances of class IPv6Datagram represent IPv6 datagrams and EthernetFrame represents Ethernet frames. The C++ dynamic_cast operator can be used to determine if a message object is of a specific protocol.
An example:
cMessage *msg = receive();
if (dynamic_cast<IPv6Datagram *>(msg) != nullptr) {
    IPv6Datagram *datagram = (IPv6Datagram *)msg;
    ...
}
When a packet has been received, some information can be obtained about the transmission, namely the transmission duration and the is-reception-start flag. They are returned by the following methods:
simtime_t getDuration() const;
bool isReceptionStart() const;
When modeling layered protocols of computer networks, it is commonly needed to encapsulate a packet into another. The following cPacket methods are associated with encapsulation:
void encapsulate(cPacket *packet);
cPacket *decapsulate();
cPacket *getEncapsulatedPacket() const;
The encapsulate() function encapsulates a packet into another one. The length of the packet will grow by the length of the encapsulated packet. An exception: when the encapsulating (outer) packet has zero length, OMNeT++ assumes it is not a real packet but an out-of-band signal, so its length is left at zero.
A packet can only hold one encapsulated packet at a time; the second encapsulate() call will result in an error. It is also an error if the packet to be encapsulated is not owned by the module.
Decapsulation, that is, removing the encapsulated packet, is done by the decapsulate() method. decapsulate() will decrease the length of the packet accordingly, except if it was zero. If the length would become negative, an error occurs.
The getEncapsulatedPacket() function returns a pointer to the encapsulated packet, or nullptr if no packet is encapsulated.
Example usage:
cPacket *data = new cPacket("data");
data->setByteLength(1024);

UDPPacket *udp = new UDPPacket("udp"); // subclassed from cPacket
udp->setByteLength(8);

udp->encapsulate(data);
EV << udp->getByteLength() << endl; // --> 8+1024 = 1032
And the corresponding decapsulation code:
cPacket *payload = udp->decapsulate();
Since the 3.2 release, OMNeT++ implements reference counting of encapsulated packets, meaning that when a packet containing an encapsulated packet is cloned (dup()), the encapsulated packet will not be duplicated, only a reference count is incremented. Duplication of the encapsulated packet is deferred until decapsulate() actually gets called. If the outer packet is deleted without its decapsulate() method ever being called, then the reference count of the encapsulated packet is simply decremented. The encapsulated packet is deleted when its reference count reaches zero.
Reference counting can significantly improve performance, especially in LAN and wireless scenarios. For example, in the simulation of a broadcast LAN or WLAN, the IP, TCP and higher layer packets won't be duplicated (and then discarded without being used) if the MAC address doesn't match in the first place.
The reference counting mechanism works transparently. However, there is one implication: one must not change anything in a packet that is encapsulated into another! That is, getEncapsulatedPacket() should be viewed as if it returned a pointer to a read-only object, for quite obvious reasons: the encapsulated packet may be shared between several packets, and any change would affect those other packets as well.
The cPacket class does not directly support encapsulating more than one packet, but one can subclass cPacket or cMessage to add the necessary functionality.
Encapsulated packets can be stored in a fixed-size or a dynamically allocated array, or in a standard container like std::vector. In addition to storage, object ownership needs to be taken care of as well. The message class has to take ownership of the inserted messages, and release them when they are removed from the message. These tasks are done via the take() and drop() methods.
Here is an example that assumes that the class has an std::list member called messages for storing message pointers:
void MultiMessage::insertMessage(cMessage *msg)
{
    take(msg);                // take ownership
    messages.push_back(msg);  // store pointer
}

void MultiMessage::removeMessage(cMessage *msg)
{
    messages.remove(msg);     // remove pointer
    drop(msg);                // release ownership
}
One also needs to provide an operator=() method to make sure that message objects are copied and duplicated properly. Section [7.13] covers requirements and conventions associated with deriving new classes in more detail.
When parameters or objects need to be added to a message, the preferred way to do that is via message definitions, described in chapter [6].
The cMessage class has an internal cArray object which can carry objects. Only objects that are derived from cObject can be attached. The addObject(), getObject(), hasObject(), removeObject() methods use the object's name (as returned by the getName() method) as the key to the array.
An example where the sender attaches an object, and the receiver checks for the object's existence and obtains a pointer to it:
// sender:
cHistogram *histogram = new cHistogram("histogram");
msg->addObject(histogram);

// receiver:
if (msg->hasObject("histogram")) {
    cObject *obj = msg->getObject("histogram");
    cHistogram *histogram = check_and_cast<cHistogram *>(obj);
    ...
}
One needs to take care that names of the attached objects don't conflict with each other. Note that message parameters (cMsgPar, see next section) are also attached the same way, so their names also count.
When no objects are attached to a message (and getParList() is not invoked), the internal cArray object is not created. This saves both storage and execution time.
Non-cObject data can be attached to messages by wrapping them into cObject, for example into cMsgPar which has been designed expressly for this purpose. cMsgPar will be covered in the next section.
The preferred way of extending messages with new data fields is to use message definitions (see chapter [6]).
The old, deprecated way of adding new fields to messages is via attaching cMsgPar objects. There are several downsides of this approach, the worst being large memory and execution time overhead. cMsgPar's are heavy-weight and fairly complex objects themselves. It has been reported that using cMsgPar message parameters might account for a large part of execution time, sometimes as much as 80%. Using cMsgPar is also error-prone because cMsgPar objects have to be added dynamically and individually to each message object. In contrast, subclassing benefits from static type checking: if one mistypes the name of a field in the C++ code, the compiler can detect the mistake.
If one still needs cMsgPars for some reason, here is a short summary. At the sender side, one can add a new named parameter to the message with the addPar() member function, then set its value with one of the methods setBoolValue(), setLongValue(), setStringValue(), setDoubleValue(), setPointerValue(), setObjectValue(), and setXMLValue(). There are also overloaded assignment operators for the corresponding C/C++ types.
At the receiver side, one can look up the parameter object on the message by name and obtain a reference to it with the par() member function. hasPar() can be used to check first whether the message object has a parameter object with the given name. Then the value can be read with the methods boolValue(), longValue(), stringValue(), doubleValue(), pointerValue(), objectValue(), xmlValue(), or by using the provided overloaded type cast operators.
Example usage:
msg->addPar("destAddr"); msg->par("destAddr").setLongValue(168); ... long destAddr = msg->par("destAddr").longValue();
Or, using overloaded operators:
msg->addPar("destAddr"); msg->par("destAddr") = 168; ... long destAddr = msg->par("destAddr");
In practice, one needs to add various fields to cMessage or cPacket to make them useful. For example, when modeling communication networks, message/packet objects need to carry protocol header fields. Since the simulation library is written in C++, the natural way of extending cMessage/cPacket is subclassing them. However, at least three items have to be added to the new class for each field (a private data member, a getter and a setter method), and the resulting class needs to integrate with the simulation framework, so writing the necessary C++ code can be a tedious and time-consuming task.
OMNeT++ offers a more convenient way called message definitions. Message definitions offer a compact syntax to describe message contents, and the corresponding C++ code is automatically generated from the definitions. When needed, the generated class can also be customized via subclassing. Even when the generated class needs to be heavily customized, message definitions can still save the programmer a great deal of manual work.
Let us begin with a simple example. Suppose that we need a packet type that carries a source and a destination address as well as a hop count. The corresponding C++ code can be generated from the following definition in a MyPacket.msg file:
packet MyPacket
{
    int srcAddress;
    int destAddress;
    int remainingHops = 32;
};
It is the task of the OMNeT++ message compiler, opp_msgc or opp_msgtool, to translate the definition into a C++ class that can be instantiated from C++ model code. The message compiler is normally invoked for .msg files automatically, as part of the build process.
When the message compiler processes MyPacket.msg, it creates two files: MyPacket_m.h and MyPacket_m.cc. The generated MyPacket_m.h will contain the following class declaration (abbreviated):
class MyPacket : public cPacket
{
  protected:
    int srcAddress;
    int destAddress;
    int remainingHops = 32;

  public:
    MyPacket(const char *name=nullptr, short kind=0);
    MyPacket(const MyPacket& other);
    MyPacket& operator=(const MyPacket& other);
    virtual MyPacket *dup() const override {return new MyPacket(*this);}
    ...
    // field getter/setter methods
    virtual int getSrcAddress() const;
    virtual void setSrcAddress(int srcAddress);
    virtual int getDestAddress() const;
    virtual void setDestAddress(int destAddress);
    virtual int getRemainingHops() const;
    virtual void setRemainingHops(int remainingHops);
};
As you can see, for each field the generated class contains a protected data member, and a public getter and a setter method. The names of the methods will begin with get and set, followed by the field name with its first letter converted to uppercase.
The MyPacket_m.cc file contains implementation of the generated MyPacket class as well as “reflection” code (see cClassDescriptor) that allows inspection of these data structures under graphical user interfaces like Qtenv. The MyPacket_m.cc file should be compiled and linked into the simulation; this is normally taken care of automatically.
In order to use the MyPacket class from a C++ source file, the generated header file needs to be included:
#include "MyPacket_m.h" ... MyPacket *pkt = new MyPacket("pkt"); pkt->setSrcAddress(localAddr); ...
Message files contain the following ingredients:
The following sections describe all of the above elements in detail.
As shown above, the message description language allows you to generate C++ data classes and structs from concise descriptions that have a syntax resembling C structs. The descriptions contain the choice of the base class (message descriptions only support single inheritance), the list of fields the class should have, and possibly various metadata annotations that e.g. control the details of the code generation.
A description starts with one of the packet, message, class, struct keywords. The first three are very similar: they all generate C++ classes, and only differ on the choice of the default base class (and related details such as the argument list of the constructor). The fourth one generates a plain (C-style) struct.
For packet, the default base class is cPacket; or if a base class is explicitly named, it must be a subclass of cPacket. Similarly, for message, the default base class is cMessage, or if a base class is specified, it must be a subclass of cMessage.
For class, the default is no base class. However, it is often a good idea to choose cObject as a base class.
The base class is specified with the extends keyword. For example:
packet FooPacket extends PacketBase { ... };
The generated C++ class will look like this:
class FooPacket : public PacketBase { ... };
The generated class will have a constructor and also a copy constructor. An assignment operator (operator=()) and cloning method (dup()) will also be generated.
The argument list of the generated constructor depends on the base class. For classes derived from cMessage, it will accept an object name and message kind. For classes derived from cNamedObject, it will accept an object name. The arguments are optional (they have default values).
class FooPacket : public PacketBase
{
  public:
    FooPacket(const char *name=nullptr, int kind=0);
    FooPacket(const FooPacket& other);
    FooPacket& operator=(const FooPacket& other);
    virtual FooPacket *dup() const;
    ...
Additional base classes can be added by listing them in the @implements class property.
Message definitions allow one to define C-style structs, “C-style” meaning “containing only data and no methods”. These structs can be useful as fields in message classes.
The syntax is similar to that of defining messages:
struct Place
{
    int type;
    string description;
    double coords[3];
};
The generated struct has public data members, and no getter or setter methods. The following code is generated from the above definition:
// generated C++
struct Place
{
    int type;
    omnetpp::opp_string description;
    double coords[3];
};
Note that string fields are generated with the opp_string C++ type, which is a minimalistic string class that wraps const char* and takes care of allocation/deallocation. It was chosen instead of std::string because of its significantly smaller memory footprint. (std::string is significantly larger than a const char* pointer because it also needs to store length and capacity information in some form.)
Inheritance is supported for structs:
struct Base
{
    ...
};

struct Extended extends Base
{
    ...
};
However, because a struct has no member functions, there are limitations:
An enum is declared with the enum keyword, using the following syntax:
enum PayloadType
{
    NONE = 0;
    VOICE = 1;
    VIDEO = 2;
    DATA = 3;
};
Enum values need to be unique.
The message compiler translates an enum into a normal C++ enum, plus also generates a descriptor that stores the symbolic names as strings. The latter makes it possible for Qtenv to display symbolic names for enum values.
Enums can be used in two ways. The first is simply to use the enum's name as field type:
packet FooPacket
{
    PayloadType payloadType;
};
The second way is to tag a field of the type int or any other integral type with the @enum property and the name of the enum, like so:
packet FooPacket
{
    int16_t payloadType @enum(PayloadType);
};
In the generated C++ code, the field will have the original type (in this case, int16_t). However, additional code generated by the message compiler will allow Qtenv to display the symbolic name of the field's value in addition to the numeric value.
Import directives are used to make definitions in one message file available to another one. Importing an MSG file makes the definitions in that file available to the file that imports it, but has no further side effect (and in particular, it will generate no C++ code).
To import a message file, use the import keyword followed by a name that identifies the message file within its project:
import inet.linklayer.common.MacAddress;
The import's parameter is interpreted as a relative file path (by replacing dots with slashes, and appending .msg), which is searched for in folders listed in the message import path, much like C/C++ include files are searched for in the compiler's include path, Python modules in the Python module search path, or NED files in the NED path.
The message import path can be specified to the message compiler via a series of -I command-line options.
To place generated types into a namespace, add a namespace directive above the types in question:
namespace inet;
Hierarchical (nested) namespaces are declared using double colons in the namespace definition, much like the nested namespace definitions introduced in C++17.
namespace inet::ieee80211;
The above code will be translated into multiple nested namespaces in the C++ code:
namespace inet { namespace ieee80211 { ... }}
There can be multiple namespace directives in a message file. The effect of the namespace directive extends from the place of the directive until the next namespace directive or the end of the message file. Each namespace directive opens a completely new namespace, i.e. not a namespace within the previous one. An empty namespace directive (namespace;) returns to the global namespace. For example:
namespace foo::bar;
class A {}   // defines foo::bar::A

namespace baz;
class B {}   // defines baz::B

namespace;
class C {}   // defines ::C
Properties are metadata annotations of the syntax @name or @name(...) that may occur on file, class (packet, struct, etc.) definition, and field level. There are many predefined properties, and a large subset of them deal with the details of what C++ code to generate for the item they occur with. For example, @getter(getFoo) on a field requests that the generated getter function have the name getFoo.
Here is a syntax example. Note that class properties are placed in the fields list (fields and properties may be mixed in arbitrary order), and field properties are written after the field name.
@foo;
class Foo
{
    @customize(true);
    string value @getter(...) @setter(...) @hint("...");
}
Syntactically, the mandatory part of a property is the @ character followed by the property name. They are then optionally followed by an index and a parameter list. The index is a name in square brackets, and it is rarely used. The parameter list is enclosed in parentheses, and in theory it may contain a value list and key-valuelist pairs, but almost all properties expect to find just a single value there.
For boolean properties, the value may be true or false; if the value is missing, true is assumed. Thus, @customize is equivalent to @customize(true).
As a guard against mistyping property names, properties need to be declared before they can be used. Properties are declared using the @property property, with the name of the new property in the index, and the type and other attributes of the property in the parameter list. Examples of property declarations, including the declaration of @property itself, can be seen by listing the built-in definitions of the message compiler (opp_msgtool -h builtindefs).
The full list of properties understood by the message compiler and other OMNeT++ tools can be found in Appendix [24].
The following data types can be used for fields:
In addition, OMNeT++ types such as simtime_t and cMessage are also made available without the need to import anything. These names are accepted both with and without spelling out the omnetpp namespace name.
Numeric fields are initialized to zero, booleans to false, and string fields to the empty string.
A scalar field is one that holds a single value. It is defined by specifying the data type and the field name, for example:
int timeToLive;
For each field, the generated class will have a protected data member, and a public getter and setter method. The names of the methods will begin with get and set, followed by the field name with its first letter converted to uppercase. Thus, the above field will generate the following methods in the C++ class:
int getTimeToLive() const;
void setTimeToLive(int timeToLive);
The method names are derived from the field name, but they can be customized with the @getter and @setter properties, as shown below:
int timeToLive @getter(getTTL) @setter(setTTL);
The choice of the C++ type used for the data member and the getter/setter methods can be overridden with the @cppType property (and on a more fine-grained level, with @datamemberType, @argType and @returnType), although this is rarely needed.
Initial values for fields can be specified after an equal sign, like so:
int version = HTTP_VERSION;
string method = "GET";
string resource = "/";
bool keepAlive = true;
int timeout = 5*60;
Any phrase that is a valid C++ expression can be used as initializer value. (The message compiler does not check the syntax of the values, it merely copies them into the generated C++ file.)
For array fields, the initializer specifies the value for individual array elements. There is no syntax for initializing an array with a list of values.
In a subclass, it is possible to override the initial value of an inherited field. The syntax is similar to that of a field definition with initial value, only the data type is missing.
An example:
packet Ieee80211Frame
{
    int frameType;
    ...
};

packet Ieee80211DataFrame extends Ieee80211Frame
{
    frameType = DATA_FRAME; // assignment of inherited field
    ...
};
It may seem like the message compiler would need the definition of the base class to check the definition of the field being assigned. However, this is not the case: the message compiler simply trusts that such a field exists, and leaves the check to the C++ compiler.
What the message compiler actually does is derive a setter method name from the field name and generate a call to it in the constructor. Thus, the generated constructor for the above packet type would look something like this:
Ieee80211DataFrame::Ieee80211DataFrame(const char *name, int kind)
    : Ieee80211Frame(name, kind)
{
    this->setFrameType(DATA_FRAME);
    ...
}
This implementation also lets one initialize cMessage / cPacket fields such as message kind or packet length:
packet UDPPacket
{
    byteLength = 16; // results in 'setByteLength(16);' being placed into the ctor
};
A field can be marked as const by using the const keyword. A const field only has a (const) data member and a getter function, but no setter. The value can be provided via an initializer. An example:
const int foo = 24;
This generates a const int data member in the class, initialized to 24, and a getter member function that returns its value:
int getFoo() const;
Array fields cannot be const.
Note that a pointer field may also be marked const, but const is interpreted differently in that case: as a mutable field that holds a pointer to a const object.
One use of const is to implement computed fields. For that, the field needs to be annotated with the @custom or @customImpl property to allow for a custom implementation to be supplied for the getter. The custom getter can then encapsulate the computation of the field value. Customization is covered in section [6.10].
Abstract fields are a way to allow a custom implementation (storage, getter/setter methods, etc.) to be provided for a field. For a field marked as abstract, the message compiler does not generate a data member, and the generated getter/setter methods will be pure virtual. It is expected that the pure virtual methods will be implemented in a subclass (possibly via @customize, see section [6.10]).
A field is declared abstract by using the abstract keyword or the @abstract property (the two are equivalent).
abstract bool urgentBit; // or: bool urgentBit @abstract;
The generated pure virtual methods:
virtual bool getUrgentBit() const = 0;
virtual void setUrgentBit(bool urgentBit) = 0;
Alternatives to abstract, at least for certain use cases, are @custom and @customImpl (see section [6.10]).
Fixed-size arrays can be declared with the usual syntax of putting the array size in square brackets after the field name:
int route[4];
The generated getter and setter methods will have an extra k argument (the array index), and a third method that returns the array size is also generated:
int getRoute(size_t k) const;
void setRoute(size_t k, int route);
size_t getRouteArraySize() const;
When the getter or setter method is called with an index that is out of bounds, an exception is thrown.
The method names can be overridden with the @getter, @setter and @sizeGetter properties. To use another C++ type for array size and indices instead of the default size_t, specify the @sizeType property.
When a default value is given, it is interpreted as a scalar for filling the array with. There is no syntax for initializing an array with a list of values.
int route[4] = -1; // all elements set to -1
If the array size is not known in advance, the field can be declared to have a variable size by using an empty pair of brackets:
int route[];
In this case, the generated class will have extra methods in addition to the getter and setter: one for resizing the array, one for getting the array size, plus methods for inserting an element at a given position, appending an element, and erasing an element at a given position.
int getRoute(size_t k) const;
void setRoute(size_t k, int route);
void setRouteArraySize(size_t size);
size_t getRouteArraySize() const;
void insertRoute(size_t k, int route);
void appendRoute(int route);
void eraseRoute(size_t k);
The default array size is zero. Elements can be added by calling the inserter or the appender method, or resizing the array and setting individual elements.
Internally, all methods that change the array size (inserter, appender, resizer) always allocate a new array and copy the existing values over to it. Therefore, when adding a large number of elements, it is recommended to resize the array first instead of calling the appender method repeatedly.
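A short sketch of the recommended pattern, assuming the generated route[] field from the example above and a hops vector holding the values to store:

pkt->setRouteArraySize(hops.size());   // allocate the array once
for (size_t k = 0; k < hops.size(); k++)
    pkt->setRoute(k, hops[k]);         // then fill in the elements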
The method names can be overridden with the @getter, @setter, @sizeGetter, @sizeSetter, @inserter, @appender and @eraser field properties. To use another C++ type for array size and indices instead of the default size_t, specify the @sizeType property.
When a default value is given, it is used for initializing new elements when the array is expanded.
int route[] = -1;
Classes and structs may also be used as fields, not only primitive types and string. For example, given a class named IPAddress, one can write the following field:
IPAddress sourceAddress;
The IPAddress type must be known to the message compiler.
The generated class will contain an IPAddress data member, and the following member functions:
const IPAddress& getSourceAddress() const;
void setSourceAddress(const IPAddress& sourceAddress);
IPAddress& getSourceAddressForUpdate();
Note that in addition to the getter and setter, a mutable getter (get...ForUpdate) is also generated, which allows the stored value (object or struct) to be modified in place.
By default, values are passed by reference. This can be changed by specifying the @byValue property:
IPAddress sourceAddress @byValue;
This generates the following member functions:
virtual IPAddress getSourceAddress() const;
virtual void setSourceAddress(IPAddress sourceAddress);
Note that both member functions use pass-by-value, and that the mutable getter function is not generated.
Specifying const will cause only a getter function to be generated but no setter or mutable getter, as shown before in [6.7.4].
Array fields are treated similarly, the difference being that the getter and setter methods take an extra index argument:
IPAddress route[];
The generated methods:
void setRouteArraySize(size_t size);
size_t getRouteArraySize() const;
const IPAddress& getRoute(size_t k) const;
IPAddress& getRouteForUpdate(size_t k);
void setRoute(size_t k, const IPAddress& route);
void insertRoute(size_t k, const IPAddress& route);
void appendRoute(const IPAddress& route);
void eraseRoute(size_t k);
The field type may be a pointer, both for scalar and array fields. Pointer fields come in two flavours: owning and non-owning. A non-owning pointer field just stores the pointer value regardless of the ownership of the object it points to, while an owning pointer holds the ownership of the object. This section discusses non-owning pointer fields.
Example:
cModule *contextModule; // missing @owner: non-owning pointer field
The generated methods:
const cModule *getContextModule() const;
void setContextModule(cModule *contextModule);
cModule *getContextModuleForUpdate();
If the field is marked const, then the setter will take a const pointer, and the getForUpdate() method is not generated:
const cModule *contextModule;
The output:
const cModule *getContextModule() const;
void setContextModule(const cModule *contextModule);
This section discusses pointer fields that own the objects they point to, that is, are responsible for deallocating the object when the object containing the field (let's refer to it as container object) is deleted.
For all owning pointer fields in a class, the destructor of the class deletes the owned objects, the dup() method and the copy constructor duplicate the owned objects for the newly created object, and the assignment operator (operator=) does both: the old objects in the destination object are deleted, and replaced by clones of the objects in the source object.
When the owned object is a subclass of cOwnedObject that keeps track of its owner, the code generated for the container class invokes the take() and drop() methods at the appropriate times to manage the ownership.
Example:
cPacket *payload @owned;
The generated methods:
const cPacket *getPayload() const;
cPacket *getPayloadForUpdate();
void setPayload(cPacket *payload);
cPacket *removePayload();
The getter and mutable getter return the stored pointer (or nullptr if there is none).
The remover method releases the ownership of the stored object, sets the field to nullptr, and returns the object.
The setter method behavior depends on the presence of the @allowReplace property. By default (when @allowReplace is absent), the setter does not allow replacing the object. That is, when the setter is invoked on a field that already contains an object (the pointer is non-null), an error is raised: "A value is already set, remove it first with removePayload()". One must call removePayload() before setting a new object.
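In code, the default (no @allowReplace) remove-then-set pattern might look like this sketch, where frame is assumed to be an instance of a generated class with an @owned payload field:

if (frame->getPayload() != nullptr)
    delete frame->removePayload();   // detach and dispose of the old payload
frame->setPayload(newPayload);       // the setter now accepts the new object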
When @allowReplace is specified for the field, there is no need to call the remover method before setting a new value, because the setter method deletes the old object before storing the new one.
cPacket *payload @owned @allowReplace; // allow setter to delete the old object
If the field is marked const, then the getForUpdate() method is not generated, and the setter takes a const pointer.
const cPacket *payload @owned;
The generated methods:
const cPacket *getPayload() const; void setPayload(const cPacket *payload); cPacket *removePayload();
The name of the remover method (which is the only extra method compared to non-pointer fields) can be customized using the @remover property.
It is possible to have C++ code fragments injected directly into the generated code. This is done with the cplusplus keyword optionally followed by a target in parentheses, and the code fragment enclosed in double curly braces.
The target specifies where to insert the code fragment in the generated header or implementation file; we'll get to it in a minute.
As far as a the code fragment is concerned, the message compiler does not try to
make sense of it, just simply copies it into the generated source file at the
requested location. The code fragment should be formatted so that it does not
contain a double close curly brace (}}) because it would be interpreted as
end of the fragment block.
cplusplus {{ #include "FooDefs.h" #define SOME_CONSTANT 63 }}
The target can be h (the generated header file -- this is the default), cc (the generated .cc file), the name of a type generated in the same message file (content is inserted in the declaration of the type, just before the closing curly brace), or a member function name of one such type.
cplusplus blocks with the target h are customarily used to insert #include directives, commonly used constants or macros (e.g. #defines), or, rarely, typedefs and other elements into the generated header. The fragments are pasted into the namespace which is open at that point. Note that includes should always be placed into a cplusplus(h) block above the first namespace declaration in the message file.
cplusplus blocks with cc as target allow you to insert code into the .cc file, e.g. implementations of member functions. This is useful e.g with custom-implementation fields (@customImpl, see [6.10.4]).
cplusplus blocks with a type name as target allow you to insert new data members and member functions into the class. This is useful e.g with custom fields (@custom, see [6.10.5]).
To inject code into the implementation of a member function of a generated class, specify <classname>::<methodname> as target. Supported methods include the constructor, copy constructor (use Foo& as name), destructor, operator=, copy(), parsimPack(), parsimUnpack(), etc., and the per-field generated methods (setter, getter, etc.).
The message compiler only allows types it knows about to be used for fields or base classes. If you want to use to types not generated by the message compiler, you need to do the following:
For the first one can be achieved with the @existingClass property. When a type (class or struct) is annotated with @existingClass, the message compiler remembers the definition, but assumes that the class (or struct) already exist in C++ code, and does not generate it. (However, it will still generate a class descriptor, see section [6.11].)
The second task is achieved by adding a cplusplus block with an #include directive to the message file.
For example, suppose we have a hand-written ieee802::MACAddress class defined in MACAddress.h that we would like to use for fields in multiple message files. One way to make this possible is to add a MACAddress.msg file alongside the header with the following content:
// MACAddress.msg cplusplus {{ #include "MACAddress.h" }} class ieee802::MACAddress // a separate namespace decl would also do { @existingClass; int8_t octet[6]; // assumes class has getOctet(k) and setOctet(k) }
As exemplified above, for existing classes it is possible to announce them with their namespace-qualified name, there is no need for separate namespace line.
This message file can be imported into all other message files that need MACAddress, for example like this:
import MACAddress; packet EthernetFrame { ieee802::MACAddress source; ieee802::MACAddress destination; ... }
There are several possibilities for customizing a generated class:
The following sections explore the above possibilities.
The names and some other properties of generated methods can be influenced with metadata annotations (properties).
The following field properties exist for overriding method names: @getter, @setter, @getterForUpdate, @remover, @sizeGetter, @sizeSetter, @inserter, @appender and @eraser.
To override data types used by the data member and its accessor methods, use @cppType, @datamemberType, @argType, or @returnType.
To override the default size_t type used for array size and indices, use @sizeType.
Consider the following example:
packet IPPacket { int ttl @getter(getTTL) @setter(setTTL); Option options[] @sizeGetter(getNumOptions) @sizeSetter(setNumOptions) @sizetype(short); }
The generated class would have the following methods (note the differences from the default names getTtl(), setTtl(), getOptions(), setOptions(), getOptionsArraySize(), getOptionsArraySize(); also note that indices and array sizes are now short):
virtual int getTTL() const; virtual void setTTL(int ttl); virtual const Option& getOption(short k) const; virtual void setOption(short k, const Option& option); virtual short getNumOptions() const; virtual void setNumOptions(short n);
In some older simulation models you may also see the use of the @omitGetVerb class property. This property tells the message compiler to generate getter methods without the “get” prefix, e.g. for a sourceAddress field it would generate a sourceAddress() method instead of the default getSourceAddress(). It is not recommended to use @omitGetVerb in new models, because it is inconsistent with the accepted naming convention.
Generally, literal C++ blocks (the cplusplus keyword) are the way to inject code into the body of individual methods, as described in [6.8].
The @beforeChange class property can be used to designate a member function which is to be called before any mutator code (in setters, non-const getters, assignment operator, etc.) executes. This can be used to implement e.g. a dirty flag or some form of immutability (i.e. freeze the state of the object).
The @str class property aims at simplifying adding an str() method in the generated class. Having an str() method is often useful for debugging, and it also has a special role in class descriptors (see [6.11.6]).
When @str is present, an std::string str() const method is generated for the class. The method's implementation will contain a single return keyword, with the value of the @str property copied after it.
Example:
class Location { double lat; double lon; @str("(" + std::to_string(getLat()) + "," + std::to_string(getLon()) + ")"); }
It will result in the following str() method to be generated as part of the Location class:
std::string Location::str() const { return "(" + std::to_string(getLat()) + "," + std::to_string(getLon()) + ")"; }
When member functions generated for a field need customized implementation and method-targeted C++ blocks are not sufficient, the customImpl property can be of help. When a field is marked customImpl, the message compiler will skip generating the implementations of its accessor methods in the .cc file, allowing the user to supply their own versions.
Here is a simple example. The methods in it do not perform anything extra compared to the default generated versions, but they illustrate the principle.
class Packet { int hopCount @customImpl; } cplusplus(cc) {{ int Packet::getHopCount() const { return hopCount; // replace/extend with extra code } void Packet::setHopCount(int value) { hopCount = value; // replace/extend with extra code } }}
If a field is marked with @custom, the field will only appear in the class descriptor, but no code is generated for it at all. One can inject the code that implements the field (data member, getter, setter, etc.) via targeted cplusplus blocks ([6.8]). @custom is a good way to go when you want the field to have a different underlying storage or different accessor methods than normally generated by the message compiler. (For the latter case, however, be aware that the generated class descriptor assumes the presence of certain accessor methods for the field, although the set of expected methods can be customized to a degree. See [6.11] for details.)
The following example uses @custom to implement a field that acts a stack (has push() and pop() methods), and uses std::vector as the underlying data structure.
cplusplus {{ #include <vector> }} class MPLSHeader { int32_t label[] @custom @sizeGetter(getNumLabels) @sizeSetter(setNumLabels); } cplusplus(MPLSHeader) {{ protected: std::vector<int32_t> labels; public: // expected methods: virtual void setNumLabels(size_t size) {labels.resize(size);} virtual size_t getNumLabels() const {return labels.size();} virtual int32_t getLabel(size_t k) const {return labels.at(k);} virtual void setLabel(size_t k, int32_t label) {labels.at(k) = label;} // new methods: virtual void pushLabel(int32_t label) {labels.push_back(label);} virtual int32_t popLabel() {auto l=labels.back();labels.pop_back();return l;} }} cplusplus(MPLSHeader::copy) {{ labels = other.labels; }}
The last C++ block is needed so that the copy constructor and the operator= method also copies the new field. (copy() is a member function where the common part of the above two are factored out, and the C++ block injects code in there.)
Another way of customizing the generated code is by employing what is known as the Generation Gap design pattern, proposed by John Vlissides. The idea is that the customization can be done while subclassing the generated class, overriding whichever member functions need to be different from their generated versions.
This feature is enabled by adding the @customize property on the class. Doing so will cause the message compiler to generate an intermediate class instead of the final one, and the user will subclass the intermediate class to obtain the real class. The name of the intermediate class is obtained by appending _Base to the class name. The subclassing code can be in an entirely different header and .cc file from the generated one, so this method does not require the use of cplusplus blocks.
Consider the following example:
packet FooPacket { @customize(true); ... };
The message compiler will generate a FooPacket_Base class instead of FooPacket. It is then the user's task to subclass FooPacket_Base to derive FooPacket, while adding extra data members and adding/overriding methods to achieve the goals that motivated the customization.
There is a minimum amount of code you have to write for FooPacket, because not everything can be pre-generated as part of FooPacket_Base (e.g. constructors cannot be inherited). This minimum code, which usually goes into a header file, is the following:
class FooPacket : public FooPacket_Base { private: void copy(const FooPacket& other) { ... } public: FooPacket(const char *s=nullptr, short kind=0) : FooPacket_Base(s,kind) {} FooPacket(const FooPacket& other) : FooPacket_Base(other) {copy(other);} FooPacket& operator=(const FooPacket& other) {if (this==&other) return *this; FooPacket_Base::operator=(other); copy(other); return *this;} virtual FooPacket *dup() const override {return new FooPacket(*this);} };
The generated constructor, copy constructor, operator=, dup() can be usually be copied verbatim. The only method that needs to be custom code is copy(). It is shared by the copy constructor and operator=, and should take care of copying the new data members you added as part of FooPacket.
In addition to the above, the implementation (.cc) file should contain the registration of the new class:
Register_Class(FooPacket);
Abstract fields, introduced in [6.7.5], are an alternative to @custom (see [6.10.5]) for allowing a custom implementation (such as storage, getter/setter methods, etc.) to be provided for a field. For a field marked abstract, the message compiler does not generate a data member, and generated getter/setter methods will be pure virtual.
Abstract fields are most often used together with the Generation Gap pattern (see [6.10.6]), so that one can immediately supply a custom implementation.
The following example demonstrates the use of abstract fields for creating an array field that uses std::vector as underlying implementation:
packet FooPacket { @customize(true); abstract int foo[]; // impl will use std::vector<int> }
If you compile the above code, in the generated C++ code you will only find abstract methods for foo, but no underlying data member or method implementation. You can implement everything as you like. You can then write the following C++ file to implement foo with std::vector (some details omitted for brevity):
#include <vector> #include "FooPacket_m.h" class FooPacket : public FooPacket_Base { protected: std::vector<int> foo; public: // constructor and other methods omitted, see below ... virtual int getFoo(size_t k) {return foo[k];} virtual void setFoo(size_t k, int x) {foo[k]=x;} virtual void addFoo(int x) {foo.push_back(x);} virtual void setFooArraySize(size_t size) {foo.resize(size);} virtual size_t getFooArraySize() const {return foo.size();} }; Register_Class(FooPacket);
Some additional boilerplate code is needed so that the class conforms to conventions, and duplication and copying works properly:
FooPacket(const char *name=nullptr, int kind=0) : FooPacket_Base(name,kind) { } FooPacket(const FooPacket& other) : FooPacket_Base(other.getName()) { operator=(other); } FooPacket& operator=(const FooPacket& other) { if (&other==this) return *this; FooPacket_Base::operator=(other); foo = other.foo; return *this; } virtual FooPacket *dup() { return new FooPacket(*this); }
For each generated class and struct, the message compiler also generates an associated descriptor class, which class carries “reflection” information about the new class. The descriptor class encapsulates virtually all information that the original message definition contains, and exposes it via member functions. Reflection information allows inspecting object contents down to field level in Qtenv, filtering objects by a filter expression that refers to object fields, serializing messages-packets in a readable form for the eventlog file, and has several further potential uses.
The descriptor class is subclassed from cClassDescriptor. It has methods for enumerating fields (getFieldCount(), getFieldName(), getFieldTypeString(), etc.), for getting and setting a field's value in string form (getFieldAsString(), setFieldAsString()) and as cValue (getFieldValue(), setFieldValue()), for exploring the class hierarchy (getBaseClassDescriptor(), etc.), for accessing class and field properties, and for similar tasks.
Classes derived from cObject have a virtual member function getDescriptor that returns their associated descriptor. For other classes, it is possible to obtain the descriptor using cClassDescriptor::getDescriptorFor() with the class name as argument.
Several properties control the creation and details of the class descriptor.
The @descriptor class property can be used to control the generation of the descriptor class. @descriptor(readonly) instructs the message compiler not to generate field setters for the descriptor, and @descriptor(false) instructs it not to generate a description class for the class at all.
It is also possible to use (or abuse) the message compiler for generating a descriptor class for an existing class. To do that, write a message definition for your existing class (for example, if it has int getFoo() and setFoo(int) methods, add an int foo field to the message definition), and mark it with @existingClass. This will tell the message compiler that it should not generate an actual class (as it already exists), only a descriptor class.
When an object is shown in Qtenv's Object Inspector pane, Qtenv obtains all information it displays from the object's descriptor. There are several properties that can be used to customize how a field appears in the Object Inspector:
Several of the properties which are for overriding field accessor method names (@getter, @setter, @sizeGetter, @sizeSetter, etc., see [6.10.1]) have a secondary purpose. When generating a descriptor for an existing class (see @existingClass), those properties specify how the descriptor can access the field, i.e. what code to generate in the implementation of the descriptor's various methods. In that use case, such properties may contain code fragments or a function call template instead of a method name.
To be able to generate the descriptor's getFieldValueAsString() member function, the message compiler needs to know how to convert the return type of the getter to std::string. Similarly, for setFieldValueAsString() it needs to know how to convert (or parse) a string to obtain the setter's argument type. For the built-in types (int, double, etc.) this information is pre-configured, but for other types the user needs to supply it via two properties:
These properties can be specified on the class (where it will be applied to fields of that type), or directly on fields. Multiple syntaxes are accepted:
Example:
class IPAddress { @existingClass; @opaque; @toString(.str()); // use IPAddress::str() to produce a string @fromString(IPAddress($)); // use constructor; '$' will be replaced by the string }
If the @toString property is missing, the message compiler generates code which calls the str() member function on the value returned by the getter, provided that it knows for certain that the corresponding type has such method (the type is derived from cObject, or has the @str property).
If there is no @toString property and no (known) str() method, the descriptor will return the empty string.
Similarly to @toString/@fromString described in the previous section, the @toValue and @fromValue properties are used define how to convert the field's value to and from cValue for the descriptor's getFieldValue() and setFieldValue() methods.
There are several boolean-valued properties which enable/disable various features in the descriptor:
OMNeT++ has an extensive C++ class library available to the user for implementing simulation models and model components. Part of the class library's functionality has already been covered in the previous chapters, including discrete event simulation basics, the simple module programming model, module parameters and gates, scheduling events, sending and receiving messages, channel operation and programming model, finite state machines, dynamic module creation, signals, and more.
This chapter discusses the rest of the simulation library. Topics will include logging, random number generation, queues, topology discovery and routing support, and statistics and result collection. This chapter also covers some of the conventions and internal mechanisms of the simulation library to allow one extending it and using it to its full potential.
Classes in the OMNeT++ simulation library are part of the omnetpp namespace. To use the OMNeT++ API, one must include the omnetpp.h header file and either import the namespace with using namespace omnetpp, or qualify names with the omnetpp:: prefix.
Thus, simulation models will contain the
#include <omnetpp.h>
line, and often also
using namespace omnetpp;
When writing code that should work with various versions of OMNeT++, it is often useful to have compile-time access to the OMNeT++ version in a numeric form. The OMNETPP_VERSION macro exists for that purpose, and it is defined by OMNeT++ to hold the version number in the form major*256+minor. For example, in OMNeT++ 4.6 it was defined as
#define OMNETPP_VERSION 0x406
Most classes in the simulation library are derived from cObject, or its subclasses cNamedObject and cOwnedObject. cObject defines several virtual member functions that are either inherited or redefined by subclasses. Otherwise, cObject is a zero-overhead class as far as memory consumption goes: it purely defines an interface but has no data members. Thus, having cObject a base class does not add anything to the size of a class if it already has at least one virtual member function.
The subclasses cNamedObject and cOwnedObject add data members to implement more functionality. The following sections discuss some of the practically important functonality defined by cObject.
The most useful and most visible member functions of cObject are getName() and getFullName(). The idea behind them is that many objects in OMNeT++ have names by default (for example, modules, parameters and gates), and even for other objects, having a printable name is a huge gain when it comes to logging and debugging.
getFullName() is important for gates and modules, which may be part of gate or module vectors. For them, getFullName() returns the name with the index in brackets, while getName() only returns the name of the module or gate vector. That is, for a gate out[3] in the gate vector out[10], getName() returns "out", and getFullName() returns "out[3]". For other objects, getFullName() simply returns the same string as getName(). An example:
cGate *gate = gate("out", 3); // out[3] EV << gate->getName(); // prints "out" EV << gate->getFullName(); // prints "out[3]"
cObject merely defines these member functions, but they return an empty string. Actual storage for a name string and a setName() method is provided by the class cNamedObject, which is also an (indirect) base class for most library classes. Thus, one can assign names to nearly all user-created objects. It it also recommended to do so, because a name makes an object easier to identify in graphical runtimes like Qtenv.
By convention, the object name is the first argument to the constructor of every class, and it defaults to the empty string. To create an object with a name, pass the name string (a const char* pointer) as the first argument of the constructor. For example:
cMessage *timeoutMsg = new cMessage("timeout");
To change the name of an object, use setName():
timeoutMsg->setName("timeout");
Both the constructor and setName() make an internal copy of the string,
instead of just storing the pointer passed to them.
For convenience and efficiency reasons, the empty string "" and nullptr are treated as interchangeable by library objects. That is, "" is stored as nullptr but returned as "". If one creates a message object with either nullptr or "" as its name string, it will be stored as nullptr, and getName() will return a pointer to a static "".
getFullPath() returns the object's hierarchical name. This name is produced by prepending the full name (getFullName()) with the parent or owner object's getFullPath(), separated by a dot. For example, if the out[3] gate in the previous example belongs to a module named classifier, which in turn is part of a network called Queueing, then the gate's getFullPath() method will return "Queueing.classifier.out[3]".
cGate *gate = gate("out", 3); // out[3] EV << gate->getName(); // prints "out" EV << gate->getFullName(); // prints "out[3]" EV << gate->getFullPath(); // prints "Queueing.classifier.out[3]"
The getFullName() and getFullPath() methods are extensively used in graphical runtime environments like Qtenv, and also when assembling runtime error messages.
In contrast to getName() and getFullName() which return const char * pointers, getFullPath() returns std::string. This makes no difference when logging via EV<<, but when getFullPath() is used as a "%s" argument to sprintf(), one needs to write getFullPath().c_str().
char buf[100]; sprintf("msg is '%80s'", msg->getFullPath().c_str()); // note c_str()
The getClassName() member function returns the class name as a string, including the namespace. getClassName() internally relies on C++ RTTI.
An example:
const char *className = msg->getClassName(); // returns "omnetpp::cMessage"
The dup() member function creates an exact copy of the object, duplicating contained objects also if necessary. This is especially useful in the case of message objects.
cMessage *copy = msg->dup();
dup() delegates to the copy constructor. Classes also declare an assignment operator (operator=()) which can be used to copy contents of an object into another object of the same type. dup(), the copy constructor and the assignment operator all perform deep coping: objects contained in the copied object will also be duplicated if necessary.
operator=() differs from the other two in that it does not copy the object's name string, i.e. does not invoke setName(). The rationale is that the name string is often used for identifying the particular object instance, as opposed to being considered as part of its contents.
There are several container classes in the library (cQueue, cArray etc.) For many of them, there is a corresponding iterator class that one can use to loop through the objects stored in the container.
For example:
cQueue queue; //... for (cQueue::Iterator it(queue); !it.end(); ++it) { cObject *containedObject = *it; //... }
When library objects detect an error condition, they throw a C++ exception. This exception is then caught by the simulation environment which pops up an error dialog or displays the error message.
At times it can be useful to be able stop the simulation at the place of the error (just before the exception is thrown) and use a C++ debugger to look at the stack trace and examine variables. Enabling the debug-on-errors or the debugger-attach-on-error configuration option lets you do that -- check it in section [11.12].
In a simulation there are often thousands of modules which simultaneously carry out non-trivial tasks. In order to understand a complex simulation, it is essential to know the inputs and outputs of algorithms, the information on which decisions are based, and the performed actions along with their parameters. In general, logging facilitates understanding which module is doing what and why.
OMNeT++ makes logging easy and consistent among simulation models by providing its own C++ API and configuration options. The API provides efficient logging with several predefined log levels, global compile-time and runtime filters, per-component runtime filters, automatic context information, log prefixes and other useful features. In the following sections, we look at how to write log statements using the OMNeT++ logging API.
The exact way log messages are displayed to the user depends on the user interface. In the command-line user interface (Cmdenv), the log is simply written to the standard output. In the Qtenv graphical user interface, the main window has an area for displaying the log output from the currently displayed compound module.
All logging must be categorized into one of the predefined log levels. The assigned log level determines how important and how detailed a log statement is. When deciding which log level is appropriate for a particular log statement, keep in mind that they are meant to be local to components. There's no need for a global agreement among all components, because OMNeT++ provides per component filtering. Log levels are mainly useful because log output can be filtered based on them.
OMNeT++ provides several C++ macros for the actual logging. Each one of these macros act like a C++ stream, so they can be used similarly to std::cout with operator<< (shift operator).
The actual logging is as simple as writing information into one of these special log streams as follows:
EV_ERROR << "Connection to server is lost.\n"; EV_WARN << "Queue is full, discarding packet.\n"; EV_INFO << "Packet received , sequence number = " << seqNum << "." << endl; EV_TRACE << "routeUnicastPacket(" << packet << ");" << endl;
The above C++ macros work well from any C++ class, including OMNeT++ modules. In fact, they automatically capture a number of context specific information such as the current event, current simulation time, context module, this pointer, source file and line number. The final log lines will be automatically extended with a prefix that is created from the captured information (see section [10.6]).
In static class member functions or in non-class member functions an extra
EV_STATICCONTEXT macro must be present to make sure that normal log
macros compile.
void findModule(const char *name, cModule *from) { EV_STATICCONTEXT; EV_TRACE << "findModule(" << name << ", " << from << ");" << endl;
Sometimes it might be useful to further classify log statements into user defined log categories. In the OMNeT++ logging API, a log category is an arbitrary string provided by the user.
For example, a module test may check for a specific log message in the test's output. Putting the log statement into the test category ensures that extra care is taken when someone changes the wording in the statement to match the one in the test.
Similarily to the normal C++ log macros, there are separate log macros for each log level which also allow specifying the log category. Their name is the same as the normal variants' but simply extended with the _C suffix. They take the log category as the first parameter before any shift operator calls:
EV_INFO_C("test") << "Received " << numPacket << " packets in total.\n";
Occasionally it's easier to produce a log line using multiple statements. Mostly because some computation has to be done between the parts. This can be achieved by omitting the new line from the log statements which are to be continued. And then subsequent log statements must use the same log level, otherwise an implicit new line would be inserted.
EV_INFO << "Line starts here, "; ... // some other code without logging EV_INFO << "and it continues here" << endl;
Assuming a simple log prefix that prints the log level in brakets, the above code fragment produces the following output in Cmdenv:
[INFO] Line starts here, and it continues here
Sometimes it might be useful to split a line into multiple lines to achieve better formatting. In such cases, there's no need to write multiple log statements. Simply insert new lines into the sequence of shift operator calls:
EV_INFO << "First line" << endl << "second line" << endl;
In the produced output, each line will have the same log prefix, as shown below:
[INFO] First line [INFO] Second line
The OMNeT++ logging API also supports direct printing to a log stream. This is mainly useful when printing is really complicated algorithmically (e.g. printing a multi-dimensional value). The following code could produce multiple log lines each having the same log prefix.
void Matrix::print(std::stream &output) { ... } void Matrix::someFunction() { print(EV_INFO);
OMNeT++ does its best to optimize the performance of logging. The implementation fully supports conditinal compilation of log statements based on their log level. It automatically checks whether the log is recorded anywhere. It also checks global and per-component runtime log levels. The latter is efficiently cached in the components for subsequent checks. See section [10.6] for more details on how to configure these log levels.
The implementation of the C++ log macros makes use of the fact that the operator<< is bound more loosely than the conditional operator (?:). This solves conditional compilation, and also helps runtime checks by redirecting the output to a null stream. Unfortunately the operator<< calls are still evaluated on the null stream, even if the log level is disabled.
Rarely just the computation of log statement parameters may be very expensive, and thus it must be avoided if possible. In this case, it is a good idea to make the log statement conditional on whether the output is actually being displayed or recorded anywhere. The cEnvir::isLoggingEnabled() call returns false when the output is disabled, such as in “express” mode. Thus, one can write code like this:
if (!getEnvir()->isLoggingEnabled()) EV_DEBUG << "CRC: " << computeExpensiveCRC(packet) << endl;
Random numbers in simulation are usually not really random. Rather, they are produced using deterministic algorithms. Based on some internal state, the algorithm performs some deterministic computation to produce a “random” number and the next state. Such algorithms and their implementations are called random number generators or RNGs, or sometimes pseudo random number generators or PRNGs to highlight their deterministic nature. The algorithm's internal state is usually initialized from a smaller seed value.
Starting from the same seed, RNGs always produce the same sequence of random numbers. This is a useful property and of great importance, because it makes simulation runs repeatable.
RNGs are rarely used directly, because they produce uniformly distributed random numbers. When non-uniform random numbers are needed, mathematical transformations are used to produce random numbers from RNG input that correspond to specific distributions. This is called random variate generation, and it will be covered in the next section, [7.4].
It is often advantageous for simulations to use random numbers from multiple RNG instances. For example, a wireless network simulation may use one RNG for generating traffic, and another RNG for simulating transmission errors in the noisy wireless channel. Since seeds for individual RNGs can be configured independently, this arrangement allows one e.g. to perform several simulation runs with the same traffic but with bit errors occurring in different places. A simulation technique called variance reduction is also related to the use of different random number streams. OMNeT++ makes it easy to use multiple RNGs in various flexible configurations.
When assigning seeds, it is important that different RNGs and also different simulation runs use non-overlapping series of random numbers. Overlap in the generated random number sequences can introduce unwanted correlation in the simulation results.
OMNeT++ comes with the following RNG implementations.
By default, OMNeT++ uses the Mersenne Twister RNG (MT) by M. Matsumoto and T. Nishimura [Matsumoto98]. MT has a period of 219937-1, and 623-dimensional equidistribution property is assured. MT is also very fast: as fast or faster than ANSI C's rand().
OMNeT++ releases prior to 3.0 used a linear congruential generator (LCG) with a cycle length of 231-2, described in [Jain91], pp. 441-444,455. This RNG is still available and can be selected from omnetpp.ini (Chapter [11]). This RNG is only suitable for small-scale simulation studies. As shown by Karl Entacher et al. in [Entacher02], the cycle length of about 231 is too small (on todays fast computers it is easy to exhaust all random numbers), and the structure of the generated “random” points is too regular. The [Hellekalek98] paper provides a broader overview of issues associated with RNGs used for simulation, and it is well worth reading. It also contains useful links and references on the topic.
When a simulation is executed under Akaroa control (see section [11.20]), it is also possible to let OMNeT++ use Akaroa's RNG. This needs to be configured in omnetpp.ini (section [10.5]).
OMNeT++ allows plugging in your own RNGs as well. This mechanism, based on the cRNG interface, is described in section [17.5]. For example, one candidate to include could be L'Ecuyer's CMRG [LEcuyer02] which has a period of about 2191 and can provide a large number of guaranteed independent streams.
OMNeT++ can be configured to make several RNGs available for the simulation model. These global or physical RNGs are numbered from 0 to numRNGs-1, and can be seeded independently.
However, usually model code doesn't directly work with those RNGs. Instead, there is an indirection step introduced for additional flexibility. When random numbers are drawn in a model, the code usually refers to component-local or logical RNG numbers. These local RNG numbers are mapped to global RNG indices to arrive at actual RNG instances. This mapping occurs on per-component basis. That is, each module and channel object contains a mapping table similar to the following:
Local RNG index | Global RNG | |
0 | --> | 0 |
1 | --> | 0 |
2 | --> | 2 |
3 | --> | 1 |
4 | --> | 1 |
5 | --> | 3 |
In the example, the module or channel in question has 6 local (logical) RNGs that map to 4 global (physical) RNGs.
The local-to-global mapping, as well as the number of number of global RNGs and their seeding can be configured in omnetpp.ini (see section [10.5]).
The mapping can be set up arbitrarily, with the default being identity mapping (that is, local RNG k refers to global RNG k.) The mapping allows for flexibility in RNG and random number streams configuration -- even for simulation models which were not written with RNG awareness. For example, even if modules in a simulation only use the default, local RNG number 0, one can set up mapping so that different groups of modules use different physical RNGs.
In theory, RNGs could also be instantiated and used directly from C++ model code. However, doing so is not recommended, because the model would lose configurability via omnetpp.ini.
RNGs are represented with subclasses of the abstract class cRNG. In addition to random number generation methods like intRand() and doubleRand(), the cRNG interface also includes methods like selfTest() for basic integrity checking and getNumbersDrawn() to query the number of random numbers generated.
RNGs can be accessed by local RNG number via cComponent's getRNG(k) method. To access global global RNGs directly by their indices, one can use cEnvir's getRNG(k) method. However, RNGs rarely need to be accessed directly. Most simulations will only use them via random variate generation functions, described in the next section.
Random numbers produced by RNGs are uniformly distributed. This section describes how to obtain streams of non-uniformly distributed random numbers from various distributions.
The simulation library supports the following distributions:
Distribution | Description |
Continuous distributions | |
uniform(a, b) | uniform distribution in the range [a,b) |
exponential(mean) | exponential distribution with the given mean |
normal(mean, stddev) | normal distribution with the given mean and standard deviation |
truncnormal(mean, stddev) | normal distribution truncated to nonnegative values |
gamma_d(alpha, beta) | gamma distribution with parameters alpha>0, beta>0 |
beta(alpha1, alpha2) | beta distribution with parameters alpha1>0, alpha2>0 |
erlang_k(k, mean) | Erlang distribution with k>0 phases and the given mean |
chi_square(k) | chi-square distribution with k>0 degrees of freedom |
student_t(i) | student-t distribution with i>0 degrees of freedom |
cauchy(a, b) | Cauchy distribution with parameters a,b where b>0 |
triang(a, b, c) | triangular distribution with parameters a<=b<=c, a!=c |
lognormal(m, s) | lognormal distribution with mean m and variance s>0 |
weibull(a, b) | Weibull distribution with parameters a>0, b>0 |
pareto_shifted(a, b, c) | generalized Pareto distribution with parameters a, b and shift c |
Discrete distributions | |
intuniform(a, b) | uniform integer from a..b |
bernoulli(p) | result of a Bernoulli trial with probability 0<=p<=1 (1 with probability p and 0 with probability (1-p)) |
binomial(n, p) | binomial distribution with parameters n>=0 and 0<=p<=1 |
geometric(p) | geometric distribution with parameter 0<=p<=1 |
negbinomial(n, p) | negative binomial distribution with parameters n>0 and 0<=p<=1 |
poisson(lambda) | Poisson distribution with parameter lambda |
Some notes:
There are several ways to generate random numbers from these distributions, as described in the next sections.
The preferred way is to use methods defined on cComponent, the common base class of modules and channels:
double uniform(double a, double b, int rng=0) const; double exponential(double mean, int rng=0) const; double normal(double mean, double stddev, int rng=0) const; ...
These methods work with the component's local RNGs, and accept the RNG index (default 0) in their extra int parameter.
Since most simulation code is located in methods of simple modules, these methods can be usually called in a concise way, without an explicit module or channel pointer. An example:
scheduleAt(simTime() + exponential(1.0), msg);
There are two additional methods, intrand() and dblrand(). intrand(n) generates random integers in the range [0, n-1], and dblrand() generates a random double on [0,1). They also accept an additional local RNG index that defaults to 0.
It is sometimes useful to be able to pass around random variate generators as objects. The classes cUniform, cExponential, cNormal, etc. fulfill this need.
These classes subclass from the cRandom abstract class. cRandom was designed to encapsulate random number streams. Its most important method is draw() that returns a new random number from the stream. cUniform, cExponential and other classes essentially bind the distribution's parameters and an RNG to the generation function.
Let us see for example cNormal. The constructor expects an RNG (cRNG pointer) and the parameters of the distribution, mean and standard deviation. It also has a default constructor, as it is a requirement for Register_Class(). When the default constructor is used, the parameters can be set with setRNG(), setMean() and setStddev(). setRNG() is defined on cRandom. The draw() method, of course, is redefined to return a random number from the normal distribution.
An example that shows the use of a random number stream as an object:
cNormal *normal = new cNormal(getRNG(0), 0, 1); // unit normal distr. printRandomNumbers(normal, 10); ... void printRandomNumbers(cRandom *rand, int n) { EV << "Some numbers from a " << rand->getClassName() << ":" << endl; for (int i = 0; i < n; i++) EV << rand->draw() << endl; }
Another important property of cRandom is that it can encapsulate state. That is, subclasses can be implemented that, for example, return autocorrelated numbers, numbers from a stochastic process, or simply elements of a stored sequence (e.g. one loaded from a trace file).
Both the cComponent methods and the random number stream classes described above have been implemented with the help of standalone generator functions. These functions take a cRNG pointer as their first argument.
double uniform(cRNG *rng, double a, double b); double exponential(cRNG *rng, double mean); double normal(cRNG *rng, double mean, double stddev); ...
One can also specify a distribution as a histogram. The cHistogram, cKSplit and cPSquare classes can be used to generate random numbers from histograms. This feature is documented later, with the statistical classes.
One can easily add support for new distributions. We recommend that you write a standalone generator function first. Then you can add a cRandom subclass that wraps it, and/or module (channel) methods that invoke it with the module's local RNG. If the function is registered with the Define_NED_Function() macro (see [7.12]), it will be possible to use the new distribution in NED files and ini files, as well.
If you need a random number stream that has state, you need to subclass from cRandom.
cQueue is a container class that acts as a queue. cQueue can hold objects of type derived from cObject (almost all classes from the OMNeT++ library), such as cMessage, cPar, etc. Normally, new elements are inserted at the back, and removed from the front.
The member functions dealing with insertion and removal are insert() and pop().
cQueue queue("my-queue"); cMessage *msg; // insert messages for (int i = 0; i < 10; i++) { msg = new cMessage; queue.insert(msg); } // remove messages while(!queue.isEmpty()) { msg = (cMessage *)queue.pop(); delete msg; }
The length() member function returns the number of items in the queue, and empty() tells whether there is anything in the queue.
There are other functions dealing with insertion and removal. The insertBefore() and insertAfter() functions insert a new item exactly before or after a specified one, regardless of the ordering function.
The front() and back() functions return pointers to the objects at the front and back of the queue, without affecting queue contents.
The pop() function can be used to remove items from the tail of the queue, and the remove() function can be used to remove any item known by its pointer from the queue:
queue.remove(msg);
By default, cQueue implements a FIFO, but it can also act as a priority queue, that is, it can keep the inserted objects ordered. To use this feature, one needs to provide a comparison function that takes two cObject pointers, and returns -1, 0 or 1 (see the reference for details). An example of setting up an ordered cQueue:
cQueue queue("queue", someCompareFunc);
If the queue object is set up as an ordered queue, the insert() function uses the ordering function: it searches the queue contents from the head until it reaches the position where the new item needs to be inserted, and inserts it there.
The cQueue::Iterator class lets one iterate over the contents of the queue and examine each object.
The cQueue::Iterator constructor expects the queue object in the first argument. Normally, forward iteration is assumed, and the iteration is initialized to point at the front of the queue. For reverse iteration, specify reverse=true as the optional second argument. After that, the class acts as any other OMNeT++ iterator: one can use the ++ and -- operators to advance it, the * operator to get a pointer to the current item, and the end() member function to examine whether the iterator has reached the end (or the beginning) of the queue.
Forward iteration:
for (cQueue::Iterator iter(queue); !iter.end(), iter++) { cMessage *msg = (cMessage *) *iter; //... }
Reverse iteration:
for (cQueue::Iterator iter(queue, true); !iter.end(), iter--) { cMessage *msg = (cMessage *) *iter; //... }
cArray is a container class that holds objects derived from cObject. cArray implements a dynamic-size array: its capacity grows automatically when it becomes full. cArray stores pointers of objects inserted instead of making copies.
Creating an array:
cArray array("array");
Adding an object at the first free index:
cMsgPar *p = new cMsgPar("par"); int index = array.add(p);
Adding an object at a given index (if the index is occupied, you will get an error message):
cMsgPar *p = new cMsgPar("par"); int index = array.addAt(5,p);
Finding an object in the array:
int index = array.find(p);
Getting a pointer to an object at a given index:
cPar *p = (cPar *) array[index];
You can also search the array or get a pointer to an object by the object's name:
int index = array.find("par"); Par *p = (cPar *) array["par"];
You can remove an object from the array by calling remove() with the object name, the index position or the object pointer:
array.remove("par"); array.remove(index); array.remove(p);
The remove() function doesn't deallocate the object, but it returns the object pointer. If you also want to deallocate it, you can write:
delete array.remove(index);
cArray has no iterator, but it is easy to loop through all the indices with an integer variable. The size() member function returns the largest index plus one.
for (int i = 0; i < array.size(); i++) { if (array[i]) { // is this position used? cObject *obj = array[i]; EV << obj->getName() << endl; } }
The cTopology class was designed primarily to support routing in communication networks.
A cTopology object stores an abstract representation of the network in a graph form:
One can specify which modules to include in the graph. Compound modules may also be selected. The graph will include all connections among the selected modules. In the graph, all nodes are at the same level; there is no submodule nesting. Connections which span across compound module boundaries are also represented as one graph edge. Graph edges are directed, just as module gates are.
If you are writing a router or switch model, the cTopology graph can help you determine what nodes are available through which gate and also to find optimal routes. The cTopology object can calculate shortest paths between nodes for you.
The mapping between the graph (nodes, edges) and network model (modules, gates, connections) is preserved: one can find the corresponding module for a cTopology node and vice versa.
One can extract the network topology into a cTopology object with a single method call. There are several ways to specify which modules should be included in the topology:
First, you can specify which node types you want to include. The following code extracts all modules of type Router or Host. (Router and Host can be either simple or compound module types.)
cTopology topo; topo.extractByModuleType("Router", "Host", nullptr);
Any number of module types can be supplied; the list must be terminated by nullptr.
A dynamically assembled list of module types can be passed as a nullptr-terminated array of const char* pointers, or in an STL string vector std::vector<std::string>. An example for the former:
cTopology topo; const char *typeNames[3]; typeNames[0] = "Router"; typeNames[1] = "Host"; typeNames[2] = nullptr; topo.extractByModuleType(typeNames);
Second, you can extract all modules which have a certain parameter:
topo.extractByParameter("ipAddress");
You can also specify that the parameter must have a certain value for the module to be included in the graph:
cMsgPar yes = "yes"; topo.extractByParameter("includeInTopo", &yes);
The third form allows you to pass a function which can determine for each module whether it should or should not be included. You can have cTopology pass supplemental data to the function through a void* pointer. An example which selects all top-level modules (and does not use the void* pointer):
int selectFunction(cModule *mod, void *) { return mod->getParentModule() == getSimulation()->getSystemModule(); } topo.extractFromNetwork(selectFunction, nullptr);
A cTopology object uses two types: cTopology::Node for nodes and cTopology::Link for edges. (cTopology::LinkIn and cTopology::LinkOut are aliases for cTopology::Link; we'll talk about them later.)
Once you have the topology extracted, you can start exploring it. Consider the following code (we'll explain it shortly):
for (int i = 0; i < topo.getNumNodes(); i++) { cTopology::Node *node = topo.getNode(i); EV << "Node i=" << i << " is " << node->getModule()->getFullPath() << endl; EV << " It has " << node->getNumOutLinks() << " conns to other nodes\n"; EV << " and " << node->getNumInLinks() << " conns from other nodes\n"; EV << " Connections to other modules are:\n"; for (int j = 0; j < node->getNumOutLinks(); j++) { cTopology::Node *neighbour = node->getLinkOut(j)->getRemoteNode(); cGate *gate = node->getLinkOut(j)->getLocalGate(); EV << " " << neighbour->getModule()->getFullPath() << " through gate " << gate->getFullName() << endl; } }
The getNumNodes() member function returns the number of nodes in the graph, and getNode(i) returns a pointer to the ith node, a cTopology::Node structure.
The correspondence between a graph node and a module can be obtained by getNodeFor() method:
cTopology::Node *node = topo.getNodeFor(module); cModule *module = node->getModule();
The getNodeFor() member function returns a pointer to the graph node for a given module. (If the module is not in the graph, it returns nullptr). getNodeFor() uses binary search within the cTopology object so it is relatively fast.
cTopology::Node's other member functions let you determine the connections of this node: getNumInLinks(), getNumOutLinks() return the number of connections, getLinkIn(i) and getLinkOut(i) return pointers to graph edge objects.
By calling member functions of the graph edge object, you can determine the modules and gates involved. The getRemoteNode() function returns the other end of the connection, and getLocalGate(), getRemoteGate(), getLocalGateId() and getRemoteGateId() return the gate pointers and ids of the gates involved. (Actually, the implementation is a bit tricky here: the same graph edge object cTopology::Link is returned either as cTopology::LinkIn or as cTopology::LinkOut so that “remote” and “local” can be correctly interpreted for edges of both directions.)
The real power of cTopology is in finding shortest paths in the network to support optimal routing. cTopology finds shortest paths from all nodes to a target node. The algorithm is computationally inexpensive. In the simplest case, all edges are assumed to have the same weight.
A real-life example assumes we have the target module pointer; finding the shortest path to the target looks like this:
cModule *targetmodulep =...; cTopology::Node *targetnode = topo.getNodeFor(targetmodulep); topo.calculateUnweightedSingleShortestPathsTo(targetnode);
This performs the Dijkstra algorithm and stores the result in the cTopology object. The result can then be extracted using cTopology and cTopology::Node methods. Naturally, each call to calculateUnweightedSingleShortestPathsTo() overwrites the results of the previous call.
Walking along the path from our module to the target node:
cTopology::Node *node = topo.getNodeFor(this); if (node == nullptr) { EV << "We (" << getFullPath() << ") are not included in the topology.\n"; } else if (node->getNumPaths()==0) { EV << "No path to destination.\n"; } else { while (node != topo.getTargetNode()) { EV << "We are in " << node->getModule()->getFullPath() << endl; EV << node->getDistanceToTarget() << " hops to go\n"; EV << "There are " << node->getNumPaths() << " equally good directions, taking the first one\n"; cTopology::LinkOut *path = node->getPath(0); EV << "Taking gate " << path->getLocalGate()->getFullName() << " we arrive in " << path->getRemoteNode()->getModule()->getFullPath() << " on its gate " << path->getRemoteGate()->getFullName() << endl; node = path->getRemoteNode(); } }
The purpose of the getDistanceToTarget() member function of a node is self-explanatory. In the unweighted case, it returns the number of hops. The getNumPaths() member function returns the number of edges which are part of a shortest path, and path(i) returns the ith edge of them as cTopology::LinkOut. If the shortest paths were created by the ...SingleShortestPaths() function, getNumPaths() will always return 1 (or 0 if the target is not reachable), that is, only one of the several possible shortest paths are found. The ...MultiShortestPathsTo() functions find all paths, at increased run-time cost. The cTopology's getTargetNode() function returns the target node of the last shortest path search.
You can enable/disable nodes or edges in the graph. This is done by calling their enable() or disable() member functions. Disabled nodes or edges are ignored by the shortest paths calculation algorithm. The isEnabled() member function returns the state of a node or edge in the topology graph.
One usage of disable() is when you want to determine in how many hops the target node can be reached from our node through a particular output gate. To compute this, you compute the shortest paths to the target from the neighbor node while disabling the current node to prevent the shortest paths from going through it:
cTopology::Node *thisnode = topo.getNodeFor(this); thisnode->disable(); topo.calculateUnweightedSingleShortestPathsTo(targetnode); thisnode->enable(); for (int j = 0; j < thisnode->getNumOutLinks(); j++) { cTopology::LinkOut *link = thisnode->getLinkOut(i); EV << "Through gate " << link->getLocalGate()->getFullName() << " : " << 1 + link->getRemoteNode()->getDistanceToTarget() << " hops" << endl; }
In the future, other shortest path algorithms will also be implemented:
unweightedMultiShortestPathsTo(cTopology::Node *target); weightedSingleShortestPathsTo(cTopology::Node *target); weightedMultiShortestPathsTo(cTopology::Node *target);
cTopology also has methods that let one manipulate the stored graph, or even, build a graph from scratch. These methods are addNode(), deleteNode(), addLink() and deleteLink().
When extracting the topology from the network, cTopology uses the factory methods createNode() and createLink() to instantiate the node and link objects. These methods may be overridden by subclassing cTopology if the need arises, for example when it is useful to be able to store additional information in those objects.
Since version 4.3, OMNeT++ contains two utility classes for pattern matching, cPatternMatcher and cMatchExpression.
cPatternMatcher is a glob-style pattern matching class, adopted to special OMNeT++ requirements. It recognizes wildcards, character ranges and numeric ranges, and supports options such as case sensitive and whole string matching. cMatchExpression builds on top of cPatternMatcher and extends it in two ways: first, it lets you combine patterns with AND, OR, NOT into boolean expressions, and second, it applies the pattern expressions to objects instead of text. These classes are especially useful for making model-specific configuration files more concise or more powerful by introducing patterns.
cPatternMatcher holds a pattern string and several option flags, and has a matches() boolean function that determines whether the string passed as argument matches the pattern with the given flags. The pattern and the flags can be set via the constructor or by calling the setPattern() member function.
The pattern syntax is a variation on Unix glob-style patterns. The most apparent differences to globbing rules are the distinction between * and **, and that character ranges should be written with curly braces instead of square brackets; that is, any-letter is expressed as {a-zA-Z} and not as [a-zA-Z], because square brackets are reserved for the notation of module vector indices.
The following option flags are supported:
Patterns may contain the following elements:
Sets and negated sets can contain several character ranges and also enumeration of characters, for example {_a-zA-Z0-9} or {xyzc-f}. To include a hyphen in the set, place it at a position where it cannot be interpreted as character range, for example {a-z-} or {-a-z}. To include a close brace in the set, it must be the first character: {}a-z}, or for a negated set: {^}a-z}. A backslash is always taken as literal backslash (and NOT as escape character) within set definitions. When doing case-insensitive match, avoid ranges that include both alpha and non-alpha characters, because they might cause funny results.
For numeric ranges and numeric index ranges, ranges are inclusive, and both the start and the end of the range are optional; that is, {10..}, {..99} and {..} are all valid numeric ranges (the last one matches any number). Only nonnegative integers can be matched. Caveat: {17..19} will match "a17", "117" and also "963217"!
The cPatternMatcher constructor and the setPattern() member function have similar signatures:
cPatternMatcher(const char *pattern, bool dottedpath, bool fullstring, bool casesensitive);
void setPattern(const char *pattern, bool dottedpath, bool fullstring, bool casesensitive);
The matcher function:
bool matches(const char *text);
There are also some more utility functions for printing the pattern, determining whether a pattern contains wildcards, etc.
Example:
cPatternMatcher matcher("**.host[*]", true, true, true); EV << matcher.matches("Net.host[0]") << endl; // -> true EV << matcher.matches("Net.area1.host[0]") << endl; // -> true EV << matcher.matches("Net.host") << endl; // -> false EV << matcher.matches("Net.host[0].tcp") << endl; // -> false
The cMatchExpression class builds on top of cPatternMatcher, and lets one determine whether an object matches a given pattern expression.
A pattern expression consists of elements in the fieldname =~ pattern syntax; they check whether the string representation of the given field of the object matches the pattern.
These elements can be combined with the AND, OR, NOT operators, accepted in both lowercase and uppercase. AND has higher precedence than OR, but parentheses can be used to change the evaluation order.
Pattern examples:
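The following pattern expressions illustrate the syntax described above; the field names (srcAddr, name, className, kind) are hypothetical and only stand for whatever attributes the matched object exposes via its getAsString() methods:

srcAddr =~ "10.0.0.*"
name =~ "packet-*" AND NOT className =~ "*Timeout*"
(name =~ "job*" OR name =~ "task*") AND kind =~ "{1..5}"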
The cMatchExpression class has a constructor and setPattern() method similar to those of cPatternMatcher:
cMatchExpression(const char *pattern, bool dottedpath, bool fullstring, bool casesensitive);
void setPattern(const char *pattern, bool dottedpath, bool fullstring, bool casesensitive);
However, the matcher function takes a cMatchExpression::Matchable instead of string:
bool matches(const Matchable *object);
This means that objects to be matched must either be subclassed from cMatchExpression::Matchable, or be wrapped into some adapter class that does. cMatchExpression::Matchable is a small abstract class with only a few pure virtual functions:
/**
 * Objects to be matched must implement this interface
 */
class SIM_API Matchable
{
  public:
    /**
     * Return the default string to match. The returned pointer will not be
     * cached by the caller, so it is OK to return a pointer to a static buffer.
     */
    virtual const char *getAsString() const = 0;

    /**
     * Return the string value of the given attribute, or nullptr if the object
     * doesn't have an attribute with that name. The returned pointer will not
     * be cached by the caller, so it is OK to return a pointer to a static buffer.
     */
    virtual const char *getAsString(const char *attribute) const = 0;

    /**
     * Virtual destructor.
     */
    virtual ~Matchable() {}
};
To be able to match instances of an existing class that is not already a Matchable, one needs to write an adapter class. An adapter class that we can look at as an example is cMatchableString. cMatchableString makes it possible to match strings with a cMatchExpression, and is part of OMNeT++:
/**
 * Wrapper to make a string matchable with cMatchExpression.
 */
class cMatchableString : public cMatchExpression::Matchable
{
  private:
    std::string str;
  public:
    cMatchableString(const char *s) {str = s;}
    virtual const char *getAsString() const {return str.c_str();}
    virtual const char *getAsString(const char *name) const {return nullptr;}
};
An example:
cMatchExpression expr("foo* or bar*", true, true, true); cMatchableString str1("this is a foo"); cMatchableString str2("something else"); EV << expr.matches(&str1) << endl; // -> true EV << expr.matches(&str2) << endl; // -> false
Or, by using temporaries:
EV << expr.matches(&cMatchableString("this is a foo")) << endl;  // -> true
EV << expr.matches(&cMatchableString("something else")) << endl; // -> false
The NED expr() operator encapsulates a formula in an object form. On the C++ side, the object is an instance of cOwnedDynamicExpression.
The expression can be evaluated using the evaluate() method, which returns a cValue, or with one of the typed convenience methods: boolValue(), intValue(), doubleValue(), stringValue(), xmlValue(). Before that, however, a custom resolver needs to be implemented and installed using setResolver(). The resolver subclasses cDynamicExpression::IResolver, and its methods readVariable(), readMember(), callFunction() and callMethod() determine how the various constructs in the expression are evaluated.
There are several statistic and result collection classes: cStdDev, cHistogram, cPSquare and cKSplit. They are all derived from the abstract base class cStatistic; histogram-like classes derive from cAbstractHistogram.
All classes use the double type for representing observations, and compute all metrics in the same data type (except the observation count, which is an int64_t).
For weighted statistics, weights are also doubles. Being able to handle non-integer weights is important because weighted statistics are often used for computing time averages, e.g. average queue length or average channel utilization.
The cStdDev class is meant to collect summary statistics of observations. If you also need to compute a histogram, use cHistogram (or cKSplit/cPSquare) instead, because those classes already include the functionality of cStdDev.
cStdDev can collect unweighted or weighted statistics. This needs to be decided in the constructor call, and cannot be changed later. Specify true as the second argument for weighted statistics.
cStdDev unweighted("packetDelay"); // unweighted cStdDev weighted("queueLength", true); // weighted
Observations are added to the statistics object using the collect() or collectWeighted() methods. The latter takes two parameters: the value and the weight.
for (double value : values)
    unweighted.collect(value);

for (double value : values2) {
    double weight = ...
    weighted.collectWeighted(value, weight);
}
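As an illustration of weighted collection, a time average such as the mean queue length can be computed by using the time elapsed since the last queue length change as the weight. A minimal sketch follows; weighted is the cStdDev from the snippet above, while queue, lastLength and lastChangeTime are illustrative variables assumed to be maintained by the module whenever the queue changes:

// sample the current queue length, weighted by the time elapsed since the last change
simtime_t now = simTime();
weighted.collectWeighted(lastLength, (now - lastChangeTime).dbl());
lastChangeTime = now;
lastLength = queue.getLength();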
Statistics can be obtained from the object with the following methods: getCount(), getMin(), getMax(), getMean(), getStddev(), getVariance().
There are two getter methods that only work for unweighted statistics: getSum() and getSqrSum(). Plain (unweighted) sum and sum of squares are not computed for weighted observations, and it is an error to call these methods in the weighted case.
Other getter methods are primarily meant for weighted statistics: getSumWeights(), getWeightedSum(), getSqrSumWeights(), getWeightedSqrSum(). When called on unweighted statistics, these methods simply assume a weight of 1.0 for all observations.
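For example, the following lines print some of these metrics for the weighted statistic created earlier:

EV << "weighted mean = " << weighted.getMean() << endl;
EV << "sum of weights = " << weighted.getSumWeights() << endl;
EV << "weighted sum = " << weighted.getWeightedSum() << endl;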
An example:
EV << "count = " << unweighted.getCount() << endl; EV << "mean = " << unweighted.getMean() << end; EV << "stddev = " << unweighted.getStddev() << end; EV << "min = " << unweighted.getMin() << end; EV << "max = " << unweighted.getMax() << end;
cHistogram is able to represent both uniform and non-uniform bin histograms, and supports both weighted and unweighted observations. The histogram can be modified dynamically: it can be extended with new bins, and adjacent bins can be merged. In addition to the bin values (which mean count in the unweighted case, and sum of weights in the weighted case), the histogram object also keeps the number (or sum of weights) of the lower and upper outliers (“underflows” and “overflows”.)
Setting up and managing the bins based on the collected observations is usually delegated to a strategy object. However, for most use cases, histogram strategies are not something the user needs to be concerned with. The default constructor of cHistogram sets up the histogram with a default strategy that usually produces a good quality histogram without requiring manual configuration or a priori knowledge about the distribution. For special use cases, there are other histogram strategies, and it is also possible to write new ones.
cHistogram has several constructor variants. As with cStdDev, a boolean constructor argument decides whether the histogram collects unweighted (false) or weighted (true) statistics; the default is unweighted. Another argument is a hint for the number of bins. (The actual number of bins produced might differ slightly, due to dynamic range extension and bin merging performed by some strategies.)
cHistogram unweighted1("packetDelay"); // unweighted cHistogram unweighted2("packetDelay", 10); // unweighted, with ~10 bins cHistogram weighted1("queueLength", true); // weighted cHistogram weighted2("queueLength", 10, true); // weighted, with ~10 bins
It is also possible to provide a strategy object in a constructor call. (The strategy object may also be set later though, using setStrategy(). It must be called before the first observation is collected.)
cHistogram autoRangeHist("queueLength", new cAutoRangeHistogramStrategy());
This constructor can also be used to create a histogram without a strategy object, which is useful if you want to set up the histogram bins manually.
cHistogram hist("queueLength", nullptr, true); // weighted, no strategy
cHistogram also has methods where you can provide constraints and hints for setting up the bins: setMode(), setRange(), setRangeExtensionFactor(), setAutoExtend(), setNumBinsHint(), setBinSizeHint(). These methods delegate to similar methods of cAutoRangeHistogramStrategy.
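For illustration, a sketch of constraining the automatically managed bins via these convenience setters is shown below; the concrete values, the (lo, hi) argument form of setRange(), and the boolean argument of setAutoExtend() are assumptions made for the example:

cHistogram hist("queueingTime");
hist.setRange(0, 1.0);      // constrain the bins to the [0,1) range
hist.setNumBinsHint(50);    // aim for roughly 50 bins
hist.setAutoExtend(true);   // allow the range to be extended if values fall outside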
Observations are added to the histogram in the same way as with cStdDev: using the collect() and collectWeighted() methods.
Histogram bins can be accessed with the following member functions: getNumBins() returns the number of bins, getBinEdge(int k) returns the kth bin edge, getBinValue(int k) returns the count or sum of weights in bin k, and getBinPDF(int k) returns the PDF value in the bin (i.e. between getBinEdge(k) and getBinEdge(k+1)). The getBinInfo(k) method returns multiple bin data (edges, value, relative frequency) packed together in a struct. Four other methods, getUnderflowSumWeights(), getOverflowSumWeights(), getNumUnderflows() and getNumOverflows(), provide access to the outliers.
These functions, being defined on cAbstractHistogram, are available not only on cHistogram but also on cPSquare and cKSplit.
For cHistogram, bin edges and bin values can also be accessed as a vector of doubles, using the getBinEdges() and getBinValues() methods.
An example:
EV << "[" << hist.getMin() << "," << hist.getBinEdge(0) << "): " << hist.getUnderflowSumWeights() << endl; int numBins = hist.getNumBins(); for (int i = 0; i < numBins; i++) { EV << "[" << hist.getBinEdge(i) << "," << hist.getBinEdge(i+1) << "): " << hist.getBinValue(i) << endl; } EV << "[" << hist.getBinEdge(numBins) << "," << hist.getMax() << "]: " << hist.getOverflowSumWeights() << endl;
The getPDF(x) and getCDF(x) member functions return the value of the probability density function and the cumulative distribution function at a given x, respectively.
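For example, once the bins are set up, the probability of observing a value below a given threshold can be estimated from the CDF (the 0.5 threshold is illustrative):

double p = hist.getCDF(0.5);
EV << "P(x <= 0.5) is approximately " << p << endl;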
Note that bins may not be immediately available during observation collection, because some histogram strategies use precollection to gather information about the distribution before setting up the bins. Use binsAlreadySetUp() to figure out whether bins are set up already. Setting up the bins can be forced with the setupBins() method.
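If the bins are needed before collection has finished (e.g. for the CDF query above), a sketch like the following can be used:

if (!hist.binsAlreadySetUp())
    hist.setupBins();   // force the strategy to create the bins now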
The cHistogram class has several methods for creating and manipulating bins. These methods are primarily intended to be called from strategy classes, but are also useful if you want to manage the bins manually, i.e. without a strategy class.
For setting up the bins, you can either use createUniformBins() with the range (lo, hi) and the step size as parameters, or specify all bin edges explicitly in a vector of doubles to setBinEdges().
When the bins have already been set up, the histogram can be extended with new bins downwards or upwards using the prependBins() and appendBins() methods, which take a list of new bin edges to add. There is also an extendBinsTo() method that extends the histogram with equal-sized bins at either end to make sure that a supplied value falls into the histogram range. Of course, extending the histogram is only possible if there are no outliers in that direction. (The values of the outliers are not preserved, so it would not be known how many of them fall into each of the newly created bins.)
If the histogram has too many bins, adjacent ones (pairs, triplets, or groups of size n) can be merged, using the mergeBins() method.
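For example, assuming mergeBins() takes the group size as its argument, the bin count could be halved repeatedly until it drops below a limit:

while (hist.getNumBins() > 100)
    hist.mergeBins(2);   // merge adjacent bin pairs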
Example code which sets up a histogram with uniform bins:
cHistogram hist("queueLength", nullptr); // create w/o strategy object hist.createUniformBins(0, 100, 10); // 10 bins over (0,100)
The following code achieves the same, but uses setBinEdges():
std::vector<double> edges = {0,10,20,30,40,50,60,70,80,90,100};  // C++11
cHistogram hist("queueLength", nullptr);
hist.setBinEdges(edges);
Histogram strategies subclass from cIHistogramStrategy, and are responsible for setting up and managing the bins.
A cHistogram is created with a cDefaultHistogramStrategy by default, which works well in most cases. Other cHistogram constructors allow passing in an arbitrary histogram strategy.
The collect() and collectWeighted() methods of a cHistogram delegate to similar methods of the strategy object, which in turn decides when and how to set up the bins, and how to manage the bins later. (Setting up the bins may be postponed until a few observations have been collected, in order to gather more information for it.) The histogram strategy uses public histogram methods like createUniformBins() to create and manage the bins.
The following histogram strategy classes exist.
cFixedRangeHistogramStrategy sets up uniform bins over a predetermined interval. The number of bins and the histogram mode (integers or reals) also need to be configured. This strategy does not use precollection, as all input for setting up the bins must be explicitly provided by the user.
cDefaultHistogramStrategy is used by the default setup of cHistogram. This strategy uses precollection to gather input information about the distribution before setting up the bins. Precollection is used to determine the initial histogram range and the histogram mode (integers vs. reals). In integers mode, bin edges will be whole numbers.
To keep up with distributions that change over time, this histogram strategy can auto-extend the histogram range by adding new bins as needed. It also performs bin merging when necessary, to keep the number of bins reasonably low.
cAutoRangeHistogramStrategy is a generic, highly configurable, precollection-based histogram strategy that creates uniform bins and produces good-quality histograms for practical distributions.
Several constraints and hints can be specified for setting up the bins: range lower and/or upper endpoint, bin size, number of bins, mode (integers vs. reals), and whether bin size rounding is to be used.
This histogram strategy can auto-extend the histogram range by adding new bins at either end. One can also set up an upper limit to the number of histogram bins to prevent it from growing indefinitely. Bin merging can also be enabled: it will cause every two (or N) adjacent bins to be merged to reduce the number of bins if their number grows too high.
The draw() member function generates random numbers from the distribution stored by the object:
double rnd = histogram.draw();
The statistic classes have loadFromFile() member functions that read the histogram data from a text file. If you need a custom distribution that cannot be written (or it is inefficient) as a C++ function, you can describe it in histogram form stored in a text file, and use a histogram object with loadFromFile().
You can also use saveToFile() that writes out the distribution collected by the histogram object:
FILE *f = fopen("histogram.dat","w"); histogram.saveToFile(f); // save the distribution fclose(f); cHistogram restored; FILE *f2 = fopen("histogram.dat","r"); restored.loadFromFile(f2); // load stored distribution fclose(f2);
The cPSquare class implements the P2 algorithm described in [JCh85]. P2 is a heuristic algorithm for dynamic calculation of the median and other quantiles. The estimates are produced dynamically as the observations arrive. The observations are not stored; therefore, the algorithm has a very small and fixed storage requirement regardless of the number of observations. The P2 algorithm operates by adaptively shifting bin edges as observations arrive.
cPSquare only needs the number of cells, for example in the constructor:
cPSquare psquare("endToEndDelay", 20);
Afterwards, observations can be added and the resulting histogram can be queried with the same cAbstractHistogram methods as with cHistogram.
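A small sketch of collecting into a cPSquare and dumping its cells via the generic histogram accessors follows; the observations container is illustrative and stands for any source of samples:

cPSquare psquare("endToEndDelay", 20);
for (double d : observations)
    psquare.collect(d);

for (int i = 0; i < psquare.getNumBins(); i++) {
    EV << "[" << psquare.getBinEdge(i) << "," << psquare.getBinEdge(i+1) << "): "
       << psquare.getBinValue(i) << endl;
}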
The k-split algorithm is an on-line distribution estimation method. It was designed for on-line result collection in simulation programs. The method was proposed by Varga and Fakhamzadeh in 1997. The primary advantage of k-split is that without having to store the observations, it gives a good estimate without requiring a-priori information about the distribution, including the sample size. The k-split algorithm can be extended to multi-dimensional distributions, but here we deal with the one-dimensional version only.
The k-split algorithm is an adaptive histogram-type estimate which maintains a good partitioning by doing cell splits. We start out with a histogram range [xlo, xhi) with k equal-sized histogram cells with observation counts n1,n2, .. nk. Each collected observation increments the corresponding observation count. When an observation count ni reaches a split threshold, the cell is split into k smaller, equal-sized cells with observation counts ni,1, ni,2, .. ni,k initialized to zero. The ni observation count is remembered and is called the mother observation count to the newly created cells. Further observations may cause cells to be split further (e.g. ni,1,1,...ni,1,k etc.), thus creating a k-order tree of observation counts where leaves contain live counters that are actually incremented by new observations, and intermediate nodes contain mother observation counts for their children. If an observation falls outside the histogram range, the range is extended in a natural manner by inserting new level(s) at the top of the tree. The fundamental parameter to the algorithm is the split factor k. Experience has shown that k=2 works best.
For density estimation, the total number of observations that fell into each cell of the partition has to be determined. For this purpose, mother observations in each internal node of the tree must be distributed among its child cells and propagated up to the leaves.
Let n...,i be the (mother) observation count for a cell, s...,i the total observation count in the cell (n...,i plus the observation counts in all its sub-, sub-sub-, etc. cells), and m...,i the mother observations propagated to the cell. We are interested in ñ...,i = n...,i + m...,i, the estimated amount of observations in the tree nodes, especially in the leaves. In other words, if we have the estimated observation amount ñ...,i in a cell, how do we divide it to obtain m...,i,1, m...,i,2, ..., m...,i,k that can be propagated to the child cells? Naturally, m...,i,1 + m...,i,2 + ... + m...,i,k = ñ...,i.
Two natural distribution methods are even distribution (when m...,i,1 = m...,i,2 = .. = m...,i,k) and proportional distribution (when m...,i,1 : m...,i,2 : .. : m...,i,k = s...,i,1 : s...,i,2 : .. : s...,i,k). Even distribution is optimal when the s...,i,j values are very small, and proportional distribution is good when the s...,i,j values are large compared to m...,i,j. In practice, a linear combination of them seems appropriate, where λ=0 means even and λ=1 means proportional distribution:
m...,i,j = (1-λ) · ñ...,i / k + λ · ñ...,i · s...,i,j / s...,i,   where λ ∈ [0,1]
Note that while the n...,i are integers, m...,i and thus ñ...,i are typically real numbers. The histogram estimate calculated from k-split is not exact, because the frequency counts calculated in the above manner contain a degree of estimation themselves. This introduces a certain cell division error; the λ parameter should be selected so that it minimizes that error. It has been shown that the cell division error can be reduced to a more-than-acceptable small value.
Strictly speaking, the k-split algorithm is semi-online, because it needs some observations to set up the initial histogram range. Because of the range extension and cell split capabilities, the algorithm is not very sensitive to the choice of the initial range, so very few observations are sufficient for range estimation (say Npre=10). Thus we can regard k-split as an on-line method.
K-split can also be used in semi-online mode, when the algorithm is only used to create an optimal partition from a larger number of Npre observations. When the partition has been created, the observation counts are cleared and the Npre observations are fed into k-split once again. This way all mother (non-leaf) observation counts will be zero and the cell division error is eliminated. It has been shown that the partition created by k-split can be better than both the equi-distant and the equal-frequency partition.
OMNeT++ contains an implementation of the k-split algorithm, the cKSplit class.
The cKSplit class is an implementation of the k-split method. It is a subclass of cAbstractHistogram, so configuring, adding observations and querying histogram cells is done the same way as with other histogram classes.
Specific member functions allow one to fine-tune the k-split algorithm. setCritFunc() and setDivFunc() let one replace the split criteria and the cell division function, respectively. setRangeExtension() lets one enable/disable range extension. (If range extension is disabled, out-of-range observations will simply be counted as underflows or overflows.)
The class also allows one to access the k-split data structure directly, via methods like getTreeDepth(), getRootGrid(), getGrid(i), and others.
Objects of type cOutVector are responsible for writing time series data (referred to as output vectors) to a file. The record() method is used to output a value (or a value pair) with a timestamp. The object name will serve as the name of the output vector.
The vector name can be passed in the constructor,
cOutVector responseTimeVec("response time");
but in the usual arrangement you'd make the cOutVector a member of the module class and set the name in initialize(). You'd record values from handleMessage() or from a function called from handleMessage().
The following example is a Sink module which records the lifetime of every message that arrives to it.
class Sink : public cSimpleModule
{
  protected:
    cOutVector endToEndDelayVec;
    virtual void initialize();
    virtual void handleMessage(cMessage *msg);
};

Define_Module(Sink);

void Sink::initialize()
{
    endToEndDelayVec.setName("End-to-End Delay");
}

void Sink::handleMessage(cMessage *msg)
{
    simtime_t eed = simTime() - msg->getCreationTime();
    endToEndDelayVec.record(eed);
    delete msg;
}
There is also a recordWithTimestamp() method, to make it possible to record values into output vectors with a timestamp other than simTime(). Increasing timestamp order is still enforced though.
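A minimal sketch, assuming the value should be logged at a packet's arrival time rather than at the current simulation time (the arrivalTime and delay variables are illustrative):

// arrivalTime must not be earlier than the last recorded timestamp
endToEndDelayVec.recordWithTimestamp(arrivalTime, delay);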
All cOutVector objects write to a single output vector file that has the file extension .vec.
You can configure output vectors from omnetpp.ini: you can disable individual vectors, or limit recording to certain simulation time intervals (see sections [12.2.2], [12.2.5]).
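For illustration, an omnetpp.ini fragment along these lines could be used; the interval values are arbitrary, and the details of these options are covered in the sections referenced above:

[General]
**.vector-recording-intervals = 10s..300s   # record only within this time window
# or turn off vector recording altogether:
**.vector-recording = false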
If the output vector object is disabled or the simulation time is outside the specified interval, record() doesn't write anything to the output file. However, if you have a Qtenv inspector window open for the output vector object, the values will be displayed there, regardless of the state of the output vector object.
While output vectors are meant to record time series data and thus typically store a large volume of data during a simulation run, output scalars are meant to record a single value per simulation run, for example a summary statistic computed at the end of the run.
Output scalars are recorded with the recordScalar() method of cSimpleModule, and you will usually want to insert this code into the finish() function. An example:
void Transmitter::finish()
{
    double avgThroughput = totalBits / simTime();
    recordScalar("Average throughput", avgThroughput);
}
You can record whole statistic objects by calling their record() methods, declared as part of cStatistic. In the following example we create a Sink module which calculates the mean, standard deviation, minimum and maximum values of a variable, and records them at the end of the simulation.
class Sink : public cSimpleModule
{
  protected:
    cStdDev eedStats;
    virtual void initialize();
    virtual void handleMessage(cMessage *msg);
    virtual void finish();
};

Define_Module(Sink);

void Sink::initialize()
{
    eedStats.setName("End-to-End Delay");
}

void Sink::handleMessage(cMessage *msg)
{
    simtime_t eed = simTime() - msg->getCreationTime();
    eedStats.collect(eed);
    delete msg;
}

void Sink::finish()
{
    recordScalar("Simulation duration", simTime());
    eedStats.record();
}
The above calls record the data into an output scalar file, a line-oriented text file that has the file extension .sca. The format and processing of output scalar files is described in chapter [12].
Unfortunately, variables of type int, long, double do not show up by default in Qtenv; neither do STL classes (std::string, std::vector, etc.) or your own structs and classes. This is because the simulation kernel, being a library, knows nothing about types and variables in your source code.
OMNeT++ provides WATCH() and a set of related macros to make variables inspectable in Qtenv and to have them included in the snapshot file. WATCH() macros are usually placed into initialize() (to watch instance variables) or at the top of the activity() function (to watch its local variables); the point is that they should only be executed once.
long packetsSent;
double idleTime;

WATCH(packetsSent);
WATCH(idleTime);
Of course, members of classes and structs can also be watched:
WATCH(config.maxRetries);
The Qtenv runtime environment lets you inspect and also change the values of inspected variables.
The WATCH() macro can be used with any type that has a stream output operator (operator<<) defined. By default, this includes all primitive types and std::string, but since you can write operator<< for your own classes/structs and basically any type, WATCH() can be used with anything. The only limitation is that since the output should more or less fit on a single line, the amount of information that can be conveniently displayed is limited.
An example stream output operator:
std::ostream& operator<<(std::ostream& os, const ClientInfo& cli)
{
    os << "addr=" << cli.clientAddr << " port=" << cli.clientPort; // no endl!
    return os;
}
And the WATCH() line:
WATCH(currentClientInfo);
Watches for primitive types and std::string allow for changing the value from the GUI as well, but for other types you need to explicitly add support for that. What you need to do is define a stream input operator (operator>>) and use the WATCH_RW() macro instead of WATCH().
The stream input operator:
std::istream& operator>>(std::istream& is, ClientInfo& cli)
{
    // read a line from "is" and parse its contents into "cli"
    return is;
}
And the WATCH_RW() line:
WATCH_RW(currentClientInfo);
WATCH() and WATCH_RW() are basic watches; they allow one line of (unstructured) text to be displayed. However, if you have a data structure generated from message definitions (see Chapter [5]), then there is a better approach. The message compiler automatically generates meta-information describing individual fields of the class or struct, which makes it possible to display the contents on field level.
The WATCH macros to be used for this purpose are WATCH_OBJ() and WATCH_PTR(). Both expect the object to be subclassed from cObject; WATCH_OBJ() expects a reference to such class, and WATCH_PTR() expects a pointer variable.
ExtensionHeader hdr;
ExtensionHeader *hdrPtr;
...
WATCH_OBJ(hdr);
WATCH_PTR(hdrPtr);
CAUTION: With WATCH_PTR(), the pointer variable must point to a valid object or be nullptr at all times, otherwise the GUI may crash while trying to display the object. This practically means that the pointer should be initialized to nullptr even if not used, and should be set to nullptr when the object to which it points is deleted.
delete watchedPtr; watchedPtr = nullptr; // set to nullptr when object gets deleted
The standard C++ container classes (vector, map, set, etc) also have structured watches, available via the following macros:
WATCH_VECTOR(), WATCH_PTRVECTOR(), WATCH_LIST(), WATCH_PTRLIST(), WATCH_SET(), WATCH_PTRSET(), WATCH_MAP(), WATCH_PTRMAP().
The PTR-less versions expect the data items ("T") to have stream output operators (operator <<), because that is how they will display them. The PTR versions assume that data items are pointers to some type which has operator <<. WATCH_PTRMAP() assumes that only the value type (“second”) is a pointer, the key type (“first”) is not. (If you happen to use pointers as key, then define operator << for the pointer type itself.)
Examples:
std::vector<int> intvec;
WATCH_VECTOR(intvec);

std::map<std::string,Command*> commandMap;
WATCH_PTRMAP(commandMap);
The snapshot() function outputs textual information about all or selected objects of the simulation (including the objects created in module functions by the user) into the snapshot file.
bool snapshot(cObject *obj=nullptr, const char *label=nullptr);
The function can be called from module functions, like this:
snapshot();                          // dump the network
snapshot(this);                      // dump this simple module and all its objects
snapshot(getSimulation()->getFES()); // dump the future events set
snapshot() will append to the end of the snapshot file. The snapshot file name has an extension of .sna.
The snapshot file output is detailed enough to be used for debugging the simulation: by regularly calling snapshot(), one can trace how the values of variables and objects changed during the simulation. The arguments: label is a string that will appear in the output file; obj is the object whose contents are of interest. By default, the whole simulation (all modules, etc.) is written out.
If you run the simulation with Qtenv, you can also create a snapshot from the menu.
An example snapshot file (some abbreviations have been applied):
<?xml version="1.0" encoding="ISO-8859-1"?> <snapshot object="simulation" label="Long queue" simtime="9.038229311343" network="FifoNet"> <object class="cSimulation" fullpath="simulation"> <info></info> <object class="cModule" fullpath="FifoNet"> <info>id=1</info> <object class="fifo::Source" fullpath="FifoNet.gen"> <info>id=2</info> <object class="cPar" fullpath="FifoNet.gen.sendIaTime"> <info>exponential(0.01s)</info> </object> <object class="cGate" fullpath="FifoNet.gen.out"> <info>--> fifo.in</info> </object> </object> <object class="fifo::Fifo" fullpath="FifoNet.fifo"> <info>id=3</info> <object class="cPar" fullpath="FifoNet.fifo.serviceTime"> <info>0.01</info> </object> <object class="cGate" fullpath="FifoNet.fifo.in"> <info><-- gen.out</info> </object> <object class="cGate" fullpath="FifoNet.fifo.out"> <info>--> sink.in</info> </object> <object class="cQueue" fullpath="FifoNet.fifo.queue"> <info>length=13</info> <object class="cMessage" fullpath="FifoNet.fifo.queue.job"> <info>src=FifoNet.gen (id=2) dest=FifoNet.fifo (id=3)</info> </object> <object class="cMessage" fullpath="FifoNet.fifo.queue.job"> <info>src=FifoNet.gen (id=2) dest=FifoNet.fifo (id=3)</info> </object> </object> <object class="fifo::Sink" fullpath="FifoNet.sink"> <info>id=4</info> <object class="cGate" fullpath="FifoNet.sink.in"> <info><-- fifo.out</info> </object> </object> </object> <object class="cEventHeap" fullpath="simulation.scheduled-events"> <info>length=3</info> <object class="cMessage" fullpath="simulation.scheduled-events.job"> <info>src=FifoNet.fifo (id=3) dest=FifoNet.sink (id=4)</info> </object> <object class="cMessage" fullpath="...sendMessageEvent"> <info>at T=9.0464.., in dt=0.00817..; selfmsg for FifoNet.gen (id=2)</info> </object> <object class="cMessage" fullpath="...end-service"> <info>at T=9.0482.., in dt=0.01; selfmsg for FifoNet.fifo (id=3)</info> </object> </object> </object> </snapshot>
It is important to choose the correct stack size for modules. If the stack is too large, it unnecessarily consumes memory; if it is too small, stack violation occurs.
OMNeT++ contains a mechanism that detects stack overflows. It checks the intactness of a predefined byte pattern (0xdeadbeef) at the stack boundary, and reports “stack violation” if it was overwritten. The mechanism usually works fine, but occasionally it can be fooled by large -- and not fully used -- local variables (e.g. char buffer[256]): if the byte pattern happens to fall in the middle of such a local variable, it may be preserved intact and OMNeT++ does not detect the stack violation.
To be able to make a good guess about stack size, you can use the getStackUsage() call which tells you how much stack the module actually uses. It is most conveniently called from finish():
void FooModule::finish()
{
    EV << getStackUsage() << " bytes of stack used\n";
}
The value includes the extra stack added by the user interface library (see extraStackforEnvir in envir/envirbase.h), which is currently 8K for Cmdenv and at least 80K for Qtenv.
getStackUsage() also works by checking the existence of predefined byte patterns in the stack area, so it is also subject to the above effect with local variables.
It is possible to extend the NED language with new functions beyond the built-in ones. New functions are implemented in C++, and then compiled into the simulation model. When a simulation program starts up, the new functions are registered in the NED runtime, and become available for use in NED and ini files.
There are two methods to define NED functions. The Define_NED_Function() macro is the more flexible, preferred method of the two. Define_NED_Math_Function() is the older one, and it supports only certain cases. Both macros have several variants.
The Define_NED_Function() macro lets you define new functions that can accept arguments of various data types (bool, double, string, etc.), supports optional arguments and also variable argument lists (variadic functions).
The macro has two variants:
Define_NED_Function(FUNCTION, SIGNATURE);
Define_NED_Function2(FUNCTION, SIGNATURE, CATEGORY, DESCRIPTION);
The two variants are basically equivalent; the only difference is that the second one allows you to specify two more parameters, CATEGORY and DESCRIPTION. These two parameters expect human-readable strings that are displayed when listing the available NED functions.
The common parameters, FUNCTION and SIGNATURE are the important ones. FUNCTION is the name of (or pointer to) the C++ function that implements the NED function, and SIGNATURE is the function signature as a string; it defines the name, argument types and return type of the NED function.
You can list the available NED functions by running opp_run or any simulation executable with the -h nedfunctions option. The result will be similar to what you can see in Appendix [22].
$ opp_run -h nedfunctions
OMNeT++ Discrete Event Simulation...
Functions that can be used in NED expressions and in omnetpp.ini:

 Category "conversion":
  double : double double(any x)
    Converts x to double, and returns the result. A boolean argument becomes
    0 or 1; a string is interpreted as number; an XML argument causes an error.
...
Seeing the above output, it should now be obvious what the CATEGORY and DESCRIPTION macro arguments are for. OMNeT++ uses the following category names: "conversion", "math", "misc", "ned", "random/continuous", "random/discrete", "strings", "units", "xml". You can use these category names for your own functions as well, when appropriate.
The signature string has the following syntax:
returntype functionname(argtype1 argname1, argtype2 argname2, ...)
The functionname part defines the name of the NED function, and it must meet the syntactical requirements for NED identifiers (start with a letter or underscore, not be a reserved NED keyword, etc.)
The argument types and return type can be one of the following: bool, int (maps to C/C++ long), double, quantity, string, xml or any; that is, any NED parameter type plus quantity and any. quantity means double with an optional measurement unit (double and int only accept dimensionless numbers), and any stands for any type. The argument names are presently ignored.
To make arguments optional, append a question mark to the argument name. Like in C++, optional arguments may only occur at the end of the argument list, i.e. all arguments after an optional argument must also be optional. The signature string does not have syntax for supplying default values for optional arguments; that is, default values have to be built into the C++ code that implements the NED function. To let the NED function accept any number of additional arguments of arbitrary types, add an ellipsis (...) to the signature as the last argument.
Some examples:
"int factorial(int n)" "bool isprime(int n)" "double sin(double x)" "string repeat(string what, int times)" "quantity uniform(quantity a, quantity b, long rng?)" "any choose(int index, ...)"
The first three examples define NED functions with the names factorial, isprime and sin, with the obvious meanings. The fourth example can be the signature for a function that repeats a string n times, and returns the concatenated result. The fifth example is the signature of the existing uniform() NED function; it accepts numbers both with and without measurement units (of course, when invoked with measurement units, both a and b must have one, and the two must be compatible -- this should be checked by the C++ implementation). uniform() also accepts an optional third argument, an RNG index. The sixth example can be the signature of a choose() NED function that accepts an integer plus any number of additional arguments of any type, and returns the indexth one among them.
The C++ function that implements the NED function must have one of the following signatures, as defined by the NedFunction and NedFunctionExt typedefs:
cValue function(cComponent *context, cValue argv[], int argc);
cValue function(cExpression::Context *context, cValue argv[], int argc);
As you can see, the function accepts an array of cValue objects, and returns a cValue; the argc-argv style argument list should be familiar to you from the declaration of the C/C++ main() function. cValue is a class that is used during the evaluation of NED expressions, and represents a value together with its type. The context argument contains the module or channel in the context of which the NED expression is being evaluated; it is useful for implementing NED functions like getParentModuleIndex().
The function implementation does not need to worry too much about checking the number and types of the incoming arguments, because the NED expression evaluator already does that: inside the function you can be sure that the number and types of arguments correspond to the function signature string. Thus, argc is mostly useful only if you have optional arguments or a variable argument list. The NED expression evaluator also checks that the value you return from the function corresponds to the signature.
cValue can store all the needed data types (bool, double, string, etc.), and is equipped with the functions necessary to conveniently read and manipulate the stored value. The value can be read via functions like boolValue(), intValue(), doubleValue(), stringValue() (returns const char *), stdstringValue() (returns const std::string&) and xmlValue() (returns cXMLElement*), or by simply casting the object to the desired data type, making use of the provided typecast operators. Invoking a getter or typecast operator that does not match the stored data type will result in a runtime error. For setting the stored value, cValue provides a number of overloaded set() functions, assignment operators and constructors.
Further cValue member functions provide access to the stored data type; yet other functions are associated with handling quantities, i.e. doubles with measurement units. There are member functions for getting and setting the number part and the measurement unit part separately; for setting the two components together; and for performing unit conversion.
Equipped with the above information, we can already write a simple NED function that returns the length of a string:
static cValue ned_strlen(cComponent *context, cValue argv[], int argc)
{
    return (long)argv[0].stdstringValue().size();
}

Define_NED_Function(ned_strlen, "int length(string s)");
Note that since Define_NED_Function() expects the C++ function to be already declared, we place the function implementation in front of the Define_NED_Function() line. We also declare the function to be static, because its name doesn't need to be visible for the linker. In the function body, we use std::string's size() method to obtain the length of the string, and cast the result to long; the C++ compiler will convert that into a cValue using cValue's long constructor. Note that the int keyword in the signature maps to the C++ type long.
The following example defines a choose() NED function that returns, from the arguments following the index argument, the one selected by that index.
static cValue ned_choose(cComponent *context, cValue argv[], int argc)
{
    int index = (int)argv[0];
    if (index < 0 || index >= argc-1)
        throw cRuntimeError("choose(): index %d is out of range", index);
    return argv[index+1];
}

Define_NED_Function(ned_choose, "any choose(int index, ...)");
Here, the value of argv[0] is read using the typecast operator that maps to intValue(). (Note that if the value of the index argument does not fit into an int, the conversion will result in data loss!) The code also shows how to report errors (by throwing a cRuntimeError.)
The third example shows how the built-in uniform() NED function could be reimplemented by the user:
static cValue ned_uniform(cComponent *context, cValue argv[], int argc)
{
    int rng = argc==3 ? (int)argv[2] : 0;
    double argv1converted = argv[1].doubleValueInUnit(argv[0].getUnit());
    double result = uniform((double)argv[0], argv1converted, rng);
    return cValue(result, argv[0].getUnit());
    // or: argv[0].setPreservingUnit(result); return argv[0];
}

Define_NED_Function(ned_uniform, "quantity uniform(quantity a, quantity b, int rng?)");
The first line of the function body shows how to supply a default value for an optional argument, in this case the rng argument. The next line deals with unit conversion. This is necessary because the a and b arguments are both quantities and may come in with different measurement units. We use the doubleValueInUnit() function to obtain the numeric value of b in a's measurement unit. If the two units are incompatible, or only one of the arguments has a unit, an error is raised. If neither argument has a unit, doubleValueInUnit() simply returns the stored double. Then we call the uniform() C++ function to actually generate a random number, and return it in a temporary object with a's measurement unit. Alternatively, we could have overwritten the numeric part of a with the result using setPreservingUnit(), and returned that. If there is no measurement unit, getUnit() returns nullptr, which is understood by both doubleValueInUnit() and the cValue constructor.
In the previous section we have given an overview and demonstrated the basic use of the cValue class; here we go into further details.
The stored data type can be obtained with the getType() function. It returns an enum (cValue::Type) that has the following values: UNDEF, BOOL, INT, DOUBLE, STRING, XML. UNDEF is synonymous with unset; the others correspond to data types: bool, int64_t, double, const char * (std::string), cXMLElement. There is no separate QUANTITY type: quantities are also represented with the DOUBLE type, which has an optional associated measurement unit.
The getTypeName() static function returns the string equivalent of a cValue::Type. The utility function isSet() returns true if the type is different from UNDEF; isNumeric() returns true if the type is INT or DOUBLE.
cValue value = 5.0;
cValue::Type type = value.getType();      // ==> DOUBLE
EV << cValue::getTypeName(type) << endl;  // ==> "double"
We have already seen that the DOUBLE type serves both the double and quantity types of the NED function signature, by storing an optional measurement unit (a string) in addition to the double variable. A cValue can be set to a quantity by creating it with a two-argument constructor that accepts a double and a const char * for unit, or by invoking a similar two-argument set() function. The measurement unit can be read with getUnit(), and overwritten with setUnit(). If you assign a double to a cValue or invoke the one-argument set(double) method on it, that will clear the measurement unit. If you want to overwrite the number part but preserve the original unit, you need to use the setPreservingUnit(double) method.
There are several functions that perform unit conversion. The doubleValueInUnit() method accepts a measurement unit, and attempts to return the number in that unit. The convertTo() method also accepts a measurement unit, and tries to permanently convert the value to that unit; that is, if successful, it changes both the number and the measurement unit part of the object. The convertUnit() static cValue member function accepts three arguments: a quantity as a double and a unit, and a target unit; it returns the number in the target unit. A parseQuantity() static member function parses a string that contains a quantity (e.g. "5min 48s"), and returns both the numeric value and the measurement unit. Another version of parseQuantity() returns the value converted to a unit you specify. All of these functions raise an error if the unit conversion is not possible, e.g. due to incompatible units.
For performance reasons, setUnit(), convertTo() and all other functions that accept and store a measurement unit will only store the const char* pointer, but do not copy the string itself. Consequently, the passed measurement unit pointers must stay valid for at least the lifetime of the cValue object, or even longer if the same pointer propagates to other cValue objects. It is recommended that you only pass pointers that stay valid during the entire simulation. It is safe to use: (1) string constants from the code; (2) unit strings from other cValues; and (3) pooled strings e.g. from a cStringPool or from cValue's static getPooled() function.
Example code:
// manipulating the number and the measurement unit
cValue value(250, "ms");       // initialize to 250ms
value = 300.0;                 // ==> 300 (clears the unit!)
value.set(500, "ms");          // ==> 500ms
value.setUnit("s");            // ==> 500s (overwrites the unit)
value.setPreservingUnit(180);  // ==> 180s (overwrites the number)
value.setUnit(nullptr);        // ==> 180 (clears the unit)

// unit conversion
value.set(500, "ms");          // ==> 500ms
value.convertTo("s");          // ==> 0.5s
double us = value.doubleValueInUnit("us");              // ==> 500000 (value is unchanged)
double bps = cValue::convertUnit(128, "kbps", "bps");   // ==> 128000
double ms = cValue::convertUnit("2min 15.1s", "ms");    // ==> 135100

// getting persistent measurement unit strings
const char *unit = argv[0].stringValue();   // cannot be trusted to persist
value.setUnit(cValue::getPooled(unit));     // use a persistent copy for setUnit()
The Define_NED_Math_Function() macro lets you register a C/C++ “mathematical” function as a NED function. The registered C/C++ function may take up to four double arguments, and must return a double; the NED signature will be the same. In other words, functions registered this way cannot accept any NED data type other than double; cannot return anything else than double; cannot accept or return values with measurement units; cannot have optional arguments or variable argument lists; and are restricted to four arguments at most. In exchange for these restrictions, the C++ implementation of the functions is a lot simpler.
Accepted function signatures for Define_NED_Math_Function():
double f();
double f(double);
double f(double, double);
double f(double, double, double);
double f(double, double, double, double);
The simulation kernel uses Define_NED_Math_Function() to expose commonly used <math.h> functions in the NED language. Most <math.h> functions (sin(), cos(), fabs(), fmod(), etc.) can be registered directly, without any wrapper code, because their signatures are already among the accepted ones listed above.
The macro has the following variants:
Define_NED_Math_Function(NAME, ARGCOUNT);
Define_NED_Math_Function2(NAME, FUNCTION, ARGCOUNT);
Define_NED_Math_Function3(NAME, ARGCOUNT, CATEGORY, DESCRIPTION);
Define_NED_Math_Function4(NAME, FUNCTION, ARGCOUNT, CATEGORY, DESCRIPTION);
All macros accept the NAME and ARGCOUNT parameters; they are the intended name of the NED function and the number of double arguments the function takes (0..4). NAME should be provided without quotation marks (they are added inside the macro). Two macros also accept a FUNCTION parameter, which is the name of (or a pointer to) the implementing C/C++ function. The macros that don't have a FUNCTION parameter simply use the NAME parameter for that as well. The last two macros accept CATEGORY and DESCRIPTION, which have exactly the same role as with Define_NED_Function().
Examples:
Define_NED_Math_Function3(sin, 1, "math", "Trigonometric function; see <math.h>");
Define_NED_Math_Function3(cos, 1, "math", "Trigonometric function; see <math.h>");
Define_NED_Math_Function3(pow, 2, "math", "Power-of function; see <math.h>");
If you plan to implement a completely new class (as opposed to subclassing something already present in OMNeT++), you have to ask yourself whether you want the new class to be based on cObject or not. Note that we are not saying you should always subclass from cObject. Both solutions have advantages and disadvantages, which you have to consider individually for each class.
cObject already carries (or provides a framework for) significant functionality that may or may not be relevant to your particular purpose. Subclassing cObject generally means you have more code to write (as you have to redefine certain virtual functions and adhere to conventions), and your class will be a bit more heavy-weight. However, if you need to store your objects in OMNeT++ containers like cQueue, or you want to store OMNeT++ classes in your object, then you must subclass from cObject.
The most significant features of cOwnedObject are the name string (which has to be stored somewhere, so it has its overhead) and ownership management (see section [7.14]), which also provides advantages at some cost.
As a general rule, small struct-like classes like IPAddress or MACAddress are better not subclassed from cObject. If your class has at least one virtual member function, consider subclassing from cObject, which does not impose any extra cost because it doesn't have data members at all, only virtual functions.
Most classes in the simulation class library are descendants of cObject. When deriving a new class from cObject or a cObject descendant, one must redefine certain member functions so that objects of the new class can fully co-operate with the simulation library classes. A list of those methods is presented below.
The following methods must be implemented:
If the new class contains other objects subclassed from cObject, either via pointers or as a data member, the following function should be implemented:
Implementation of the following methods is recommended:
It is customary to implement the copy constructor and the assignment operator so that they delegate to the same function of the base class, and invoke a common private copy() function to copy the local members.
You should also use the Register_Class() macro to register the new class. It is used by the createOne() factory function, which can create any object given the class name as a string. createOne() is used by the Envir library to implement omnetpp.ini options such as rng-class="..." or scheduler-class="...". (see Chapter [17])
For example, an omnetpp.ini entry such as
rng-class = "cMersenneTwister"
would result in something like the following code to be executed for creating the RNG objects:
cRNG *rng = check_and_cast<cRNG*>(createOne("cMersenneTwister"));
But for that to work, we needed to have the following line somewhere in the code:
Register_Class(cMersenneTwister);
createOne() is also needed by the parallel distributed simulation feature (Chapter [16]) to create blank objects to unmarshal into on the receiving side.
We'll go through the details using an example. We create a new class NewClass, redefine all above mentioned cObject member functions, and explain the conventions, rules and tips associated with them. To demonstrate as much as possible, the class will contain an int data member, dynamically allocated non-cObject data (an array of doubles), an OMNeT++ object as data member (a cQueue), and a dynamically allocated OMNeT++ object (a cMessage).
The class declaration is the following. It contains the declarations of all methods discussed in the previous section.
//
// file: NewClass.h
//
#include <omnetpp.h>

class NewClass : public cOwnedObject
{
  protected:
    int size;
    double *array;
    cQueue queue;
    cMessage *msg;
    ...
  private:
    void copy(const NewClass& other);  // local utility function
  public:
    NewClass(const char *name=nullptr, int sz=0);
    NewClass(const NewClass& other);
    virtual ~NewClass();
    virtual NewClass *dup() const;
    NewClass& operator=(const NewClass& other);
    virtual void forEachChild(cVisitor *v);
    virtual std::string str() const;
};
We'll discuss the implementation method by method. Here is the top of the .cc file:
//
// file: NewClass.cc
//
#include <stdio.h>
#include <string.h>
#include <iostream>
#include "NewClass.h"

Register_Class(NewClass);

NewClass::NewClass(const char *name, int sz) : cOwnedObject(name)
{
    size = sz;
    array = new double[size];
    take(&queue);
    msg = nullptr;
}
The constructor (above) calls the base class constructor with the name of the object, then initializes its own data members. You need to call take() for cOwnedObject-based data members.
NewClass::NewClass(const NewClass& other) : cOwnedObject(other)
{
    size = -1;  // needed by copy()
    array = nullptr;
    msg = nullptr;
    take(&queue);
    copy(other);
}
The copy constructor relies on the private copy() function. Note that pointer members have to be initialized (to nullptr or to an allocated object/memory) before calling the copy() function.
You need to call take() for cOwnedObject-based data members.
NewClass::~NewClass()
{
    delete [] array;
    if (msg && msg->getOwner()==this)
        delete msg;
}
The destructor should delete all data structures the object allocated. cOwnedObject-based objects should only be deleted if they are owned by the object -- details will be covered in section [7.14].
NewClass *NewClass::dup() const
{
    return new NewClass(*this);
}
The dup() function is usually just one line, like the one above.
NewClass& NewClass::operator=(const NewClass& other)
{
    if (&other==this)
        return *this;
    cOwnedObject::operator=(other);
    copy(other);
    return *this;
}
The assignment operator (above) first makes sure that it will not try to copy the object onto itself, because that could be disastrous. If the source and the destination are the same object (that is, &other==this), the function returns immediately without doing anything.
The base class part is copied via invoking the assignment operator of the base class. Then the method copies over the local members using the copy() private utility function.
void NewClass::copy(const NewClass& other)
{
    if (size != other.size) {
        size = other.size;
        delete [] array;
        array = new double[size];
    }
    for (int i = 0; i < size; i++)
        array[i] = other.array[i];

    queue = other.queue;
    queue.setName(other.queue.getName());

    if (msg && msg->getOwner()==this)
        delete msg;
    if (other.msg && other.msg->getOwner()==const_cast<NewClass*>(&other))
        take(msg = other.msg->dup());
    else
        msg = other.msg;
}
Complexity associated with copying and duplicating the object is concentrated in the copy() utility function.
Data members are copied in the normal C++ way. If the class contains pointers, you will most probably want to make a deep copy of the data where they point, and not just copy the pointer values.
If the class contains pointers to OMNeT++ objects, you need to take ownership into account. If the contained object is not owned then we assume it is a pointer to an “external” object, consequently we only copy the pointer. If it is owned, we duplicate it and become the owner of the new object. Details of ownership management will be covered in section [7.14].
void NewClass::forEachChild(cVisitor *v)
{
    v->visit(&queue);
    if (msg)
        v->visit(msg);
}
The forEachChild() function should call v->visit(obj) for each obj member of the class. See the API Reference for more information about forEachChild().
std::string NewClass::str() const
{
    std::stringstream out;
    out << "size=" << size << ", array[0]=" << array[0];
    return out.str();
}
The str() method should produce a concise, one-line string about the object. You should try not to exceed 40-80 characters, since the string will be shown in tooltips and listboxes.
See the virtual functions of cObject and cOwnedObject in the class library reference for more information. The sources of the Sim library (include/, src/sim/) can serve as further examples.
OMNeT++ has a built-in ownership management mechanism which is used for sanity checks, and as part of the infrastructure supporting Qtenv inspectors.
Container classes like cQueue own the objects inserted into them, but this is not limited to objects inserted into a container: every cOwnedObject-based object has an owner all the time. From the user's point of view, ownership is managed transparently. For example, when you create a new cMessage, it will be owned by the simple module. When you send it, it will first be handed over to (i.e. change ownership to) the FES, and, upon arrival, to the destination simple module. When you encapsulate the message in another one, the encapsulating message will become the owner. When you decapsulate it again, the currently active simple module becomes the owner.
The getOwner() method, defined in cObject, returns the owner of the object:
cObject *o = msg->getOwner();
EV << "Owner of " << msg->getName() << " is: "
   << "(" << o->getClassName() << ") " << o->getFullPath() << endl;
The opposite direction, enumerating the objects owned by an object, can be implemented on top of the forEachChild() method: loop through all contained objects, and check the owner of each one.
The traditional concept of object ownership is associated with the “right to delete” objects. In addition to that, keeping track of the owner and the list of objects owned also serves other purposes in OMNeT++:
Some examples of programming errors that can be caught by the ownership facility:
For example, the send() and scheduleAt() functions check that the message being sent/scheduled is owned by the module. If it is not, then it signals a programming error: the message is probably owned by another module (already sent earlier?), or currently scheduled, or inside a queue, a message or some other object -- in either case, the module does not have any authority over it. When you get the error message ("not owner of object"), you need to carefully examine the error message to determine which object has ownership of the message, and correct the logic that caused the error.
The above errors are easy to make in the code, and if not detected automatically, they could cause random crashes which are usually very difficult to track down. Of course, some errors of the same kind still cannot be detected automatically, like calling member functions of a message object which has been sent to (and so is currently owned by) another module.
Ownership is managed transparently for the user, but this mechanism has to be supported by the participating classes themselves. It will be useful to look inside cQueue and cArray, because they might give you a hint what behavior you need to implement when you want to use non-OMNeT++ container classes to store messages or other cOwnedObject-based objects.
cArray and cQueue have internal data structures (array and linked list) to store the objects which are inserted into them. However, they do not necessarily own all of these objects. (Whether they own an object or not can be determined from that object's getOwner() pointer.)
The default behaviour of cQueue and cArray is to take ownership of the objects inserted. This behavior can be changed via the takeOwnership flag.
Here is what the insert operation of cQueue (or cArray) does:
The corresponding source code:
void cQueue::insert(cOwnedObject *obj)
{
    // insert into queue data structure
    ...
    // take ownership if needed
    if (getTakeOwnership())
        take(obj);
}
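The takeOwnership flag can be changed from model code as well. The following minimal sketch (the queue and message names are made up) shows a queue that merely references messages owned elsewhere, by disabling the ownership-taking behavior of insert():

// a queue that only stores pointers, without becoming the owner
cQueue pendingRefs("pendingRefs");
pendingRefs.setTakeOwnership(false);   // insert() will no longer call take()

cMessage *msg = new cMessage("job");   // owned by the creating module
pendingRefs.insert(msg);               // msg's owner is left unchanged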
Here is what the remove family of operations in cQueue (or cArray) does:
After the object was removed from a cQueue/cArray, you may further use it, or if it is not needed any more, you can delete it.
The release ownership phrase requires further explanation. When you remove an object from a queue or array, the ownership is expected to be transferred to the simple module's local objects list. This is accomplished by the drop() function, which transfers the ownership to the object's default owner. getDefaultOwner() is a virtual method defined in cOwnedObject, and its implementation returns the currently executing simple module's local object list.
As an example, the remove() method of cQueue is implemented like this:
cOwnedObject *cQueue::remove(cOwnedObject *obj)
{
    // remove object from queue data structure
    ...
    // release ownership if needed
    if (obj->getOwner()==this)
        drop(obj);
    return obj;
}
The concept of ownership is that the owner has the exclusive right and duty to delete the objects it owns. For example, if you delete a cQueue containing cMessages, all messages it contains and owns will also be deleted.
The destructor should delete all data structures the object allocated. From the contained objects, only the owned ones are deleted -- that is, where obj->getOwner()==this.
The ownership mechanism also has to be taken into consideration when a cArray or cQueue object is duplicated (using dup() or the copy constructor.) The duplicate is supposed to have the same content as the original; however, the question is whether the contained objects should also be duplicated or only their pointers taken over to the duplicate cArray or cQueue. A similar question arises when an object is copied using the assignment operator (operator=()).
The convention followed by cArray/cQueue is that only owned objects are copied, and the contained but not owned ones will have their pointers taken over and their original owners left unchanged.
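The effect of this convention can be seen in the following small sketch, which assumes it runs inside a simple module (the queue and message names are made up):

cQueue q("q");
cMessage *ownedMsg = new cMessage("owned");
q.insert(ownedMsg);                            // q takes ownership of ownedMsg

q.setTakeOwnership(false);
cMessage *sharedMsg = new cMessage("shared");  // remains owned by the module
q.insert(sharedMsg);                           // only the pointer is stored

cQueue *copy = q.dup();
// *copy now contains its own duplicate of ownedMsg, plus a pointer
// to the very same sharedMsg object that is still in q as well.
delete copy;                                   // deletes only the objects *copy owns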
OMNeT++ simulations can be run under graphical user interfaces like Qtenv that offer visualization and animation in addition to interactive execution and other features. This chapter deals with model visualization.
OMNeT++ essentially provides four main tools for defining and enhancing model visualization:
The following sections will cover the above topics in more detail. But first, let us get acquainted with a new cModule virtual method that one can redefine and place visualization-related code into.
Traditionally, when C++ code was needed to enhance visualization, for example to update a displayed status label or to refresh the position of a mobile node, it was embedded in handleMessage() functions, enclosed in if (ev.isGUI()) blocks. This was less than ideal, because the visualization code would run for all events in that module and not just before display updates when it was actually needed. In Express mode, for example, Qtenv would only refresh the display once every second or so, with a large number of events processed between updates, so visualization code placed inside handleMessage() could potentially waste a significant amount of CPU cycles. Also, visualization code embedded in handleMessage() is not suitable for creating smooth animations.
Starting from OMNeT++ version 5.0, visualization code can be placed into a dedicated method. It is called much more economically, that is, exactly as often as needed.
This method is refreshDisplay(), and is declared on cModule as:
virtual void refreshDisplay() const {}
Components that contain visualization-related code are expected to override refreshDisplay(), and move visualization code such as display string manipulation, canvas figure maintenance and OSG scene graph updates into it.
When and how is refreshDisplay() invoked? Generally, right before the GUI performs a display update. With some additional rules, that boils down to the following:
Here is an example of how one would use it:
void FooModule::refreshDisplay() const
{
    // refresh statistics
    char buf[80];
    sprintf(buf, "Sent:%d Rcvd:%d", numSent, numReceived);
    getDisplayString().setTagArg("t", 0, buf);

    // update the mobile node's position
    Point pos = ...  // e.g. invoke a computePosition() method
    getDisplayString().setTagArg("p", 0, pos.x);
    getDisplayString().setTagArg("p", 1, pos.y);
}
One useful accessory to refreshDisplay() is the isExpressMode() method of cEnvir. It returns true if the simulation is running under a GUI in Express mode. Visualization code may check this flag and adapt the visualization accordingly. An example:
if (getEnvir()->isExpressMode()) {
    // display throughput statistics
}
else {
    // visualize current frame transmission
}
Overriding refreshDisplay() has several advantages over putting the visualization code into handleMessage(). The first one is clearly performance. When running under Cmdenv, the runtime cost of visualization code is literally zero, and when running in Express mode under Qtenv, it is practically zero because the cost of one update is amortized over several hundred thousand or million events.
The second advantage is also very practical: consistency of the visualization. If the simulation has cross-module dependencies such that an event processed by one module affects the information displayed by another module, with handleMessage()-based visualization the model may have inconsistent visualization until the second module also processes an event and updates its displayed state. With refreshDisplay() this does not happen, because all modules are refreshed together.
The third advantage is separation of concerns. It is generally not a good idea to intermix simulation logic with visualization code, and refreshDisplay() allows one to completely separate the two.
Code in refreshDisplay() should never alter the state of the simulation, because that would destroy repeatability: the timing and frequency of refreshDisplay() calls are completely unpredictable from the simulation model's point of view. The fact that the method is declared const gently encourages this behavior.
If visualization code makes use of internal caches or maintains some other mutable state, such data members can be declared mutable to allow refreshDisplay() to change them.
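A minimal sketch of this pattern follows. The module name and members are made up; the getDisplayString() call inside the const method follows the same usage as the earlier refreshDisplay() example.

#include <omnetpp.h>
#include <string>
using namespace omnetpp;

class StatsDisplay : public cSimpleModule
{
  protected:
    long numSent = 0;                  // updated from handleMessage()
    mutable std::string cachedLabel;   // cache; writable from the const method

    virtual void refreshDisplay() const override
    {
        cachedLabel = "Sent: " + std::to_string(numSent);
        getDisplayString().setTagArg("t", 0, cachedLabel.c_str());
    }
};

Define_Module(StatsDisplay);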
Support for smooth custom animation allows models to visualize their operation using sophisticated animations. The key idea is that the simulation model is called back from the runtime GUI (Qtenv) repeatedly at a reasonable “frame rate,” allowing it to continually update the canvas (2D) and/or the 3D scene to produce fluid animations. Callback means that the refreshDisplay() methods of modules and figures are invoked.
refreshDisplay() knows the animation position from the simulation time and the animation time, a variable also made accessible to the model. If you think about the animation as a movie, animation time is simply the position in seconds in the movie. By default, the movie is played in Qtenv at normal (1x) speed, and then animation time is simply the number of seconds since the start of the movie. The speed control slider in Qtenv's toolbar allows you to play it at higher (2x, 10x, etc.) and lower (0.5x, 0.1x, etc.) speeds; so if you play the movie at 2x speed, animation time will pass twice as fast as real time.
When smooth animation is turned on (more about that later), simulation time progresses (piecewise) linearly with animation time. The speed at which simulation time progresses in the movie is called the animation speed. Sticking to the movie analogy, when simulation time in the movie passes 100 times faster than animation time, the animation speed is 100.
Certain actions take zero simulation time, but we still want to animate them. Examples of such actions are the sending of a message over a zero-delay link, or a visualized C++ method call between two modules. When these animations play out, simulation is paused and simulation time stays constant until the animation is over. Such periods are called holds.
Smooth animation is a relatively new feature in OMNeT++, and not all simulations need it. Smooth and traditional, “non-smooth” animation in Qtenv are two distinct modes which operate very differently:
The factor that decides which operation mode is active is the presence of an animation speed. If there is no animation speed, traditional animation is performed; if there is one, smooth animation is done.
The Qtenv GUI has a dialog (Animation Parameters) which displays, among other things, the current animation speed. This dialog allows the user to check at any time which operation mode is currently active.
Different animation speeds may be appropriate for different animation effects. For example, when animating WiFi traffic where various time slots are on the microsecond scale, an animation speed on the order of 10^-5 might be appropriate; when animating the movement of cars or pedestrians, an animation speed of 1 is a reasonable choice. When several animations requiring different animation speeds occur in the same scene, one solution is to animate the scene using the lowest animation speed so that even the fastest actions can be visually followed by the human viewer.
The solution provided by OMNeT++ for the above problem is the following. Animation speed cannot be controlled explicitly; only requests may be submitted. Several parts of the model may request different animation speeds. The effective animation speed is computed as the minimum of the animation speeds of visible canvases, unless the user interactively overrides it in the UI, for example by imposing a lower or upper limit.
Animation speed requests may be submitted using the setAnimationSpeed() method of cCanvas.
An example:
cCanvas *canvas = getSystemModule()->getCanvas();  // toplevel canvas
canvas->setAnimationSpeed(2.0, this);        // one request
canvas->setAnimationSpeed(1e-6, macModule);  // another request
...
canvas->setAnimationSpeed(1.0, this);        // overwrite first request
canvas->setAnimationSpeed(0, macModule);     // cancel second request
In practice, built-in animation effects such as message sending animation also submit their own animation speed requests internally, so they also affect the effective animation speed chosen by Qtenv.
The current effective animation speed can be obtained from the environment of the simulation (cEnvir, see chapter [18] for context):
double animSpeed = getEnvir()->getAnimationSpeed();
Animation time can be accessed like this:
double animTime = getEnvir()->getAnimationTime();
Animation time starts from zero, and monotonically increases with simulation time and also during “holds”.
As mentioned earlier, a hold interval is an interval when only animation takes place, but simulation time does not progress and no events are processed. Hold intervals are intended for animating actions that take zero simulation time.
A hold can be requested with the holdSimulationFor() method of cCanvas, which accepts an animation time delta as parameter. If a hold request is issued when there is one already in progress, the current hold will be extended as needed to incorporate the request. A hold request cannot be cancelled or shrunk.
cCanvas *canvas = getSystemModule()->getCanvas();  // toplevel canvas
canvas->holdSimulationFor(0.5);  // request a 0.5s (animation time) hold
When rendering frames in refreshDisplay() during a hold, the code can use animation time to determine the position in the animation. If the code needs to know the animation time elapsed since the start of the hold, it should query and remember the animation time when issuing the hold request.
If the code needs to know the animation time remaining until the end of the hold, it can use the getRemainingAnimationHoldTime() method of cEnvir. Note that this is not necessarily the time remaining from its own hold request, because other parts of the simulation might extend the hold.
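The following sketch puts the above pieces together for a hypothetical Transmitter module that flashes an oval figure for 0.5s of animation time; the figure member, the startFlash() helper and their names are assumptions made for illustration.

#include <omnetpp.h>
#include <algorithm>
using namespace omnetpp;

class Transmitter : public cSimpleModule
{
  protected:
    cOvalFigure *flashFigure = nullptr;   // created in initialize(), added to the canvas
    double flashStartAnimTime = 0;

    void startFlash()   // e.g. called from handleMessage() when a frame is sent
    {
        getParentModule()->getCanvas()->holdSimulationFor(0.5);  // 0.5s animation time
        flashStartAnimTime = getEnvir()->getAnimationTime();     // remember hold start
    }

    virtual void refreshDisplay() const override
    {
        double elapsed = getEnvir()->getAnimationTime() - flashStartAnimTime;
        double opacity = std::max(0.0, 1.0 - elapsed/0.5);  // fade out during the hold
        if (flashFigure)
            flashFigure->setLineOpacity(opacity);
    }
};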
If a model implements such full-blown animations for a compound module that OMNeT++'s default animations (message sending/method call animations) become a liability, they can be programmatically turned off for that module with cModule's setBuiltinAnimationsAllowed() method:
// disable animations for the toplevel module
cModule *network = getSimulation()->getSystemModule();
network->setBuiltinAnimationsAllowed(false);
Display strings are compact textual descriptions that specify the arrangement and appearance of the graphical representations of modules and connections in graphical user interfaces (currently Qtenv).
Display strings are usually specified in NED's @display property, but it is also possible to modify them programmatically at runtime.
Display strings can be used in the following contexts:
Display strings are specified in @display properties. The property must contain a single string as value. The string should contain a semicolon-separated list of tags. Each tag consists of a key, an equal sign and a comma-separated list of arguments:
@display("p=100,100;b=60,10,rect,blue,black,2")
Tag arguments may be omitted both at the end and inside the parameter list. If an argument is omitted, a sensible default value is used. In the following example, the first and second arguments of the b tag are omitted.
@display("p=100,100;b=,,rect,blue")
Display strings can be placed in the parameters section of module and channel type definitions, and in submodules and connections. The following NED sample illustrates the placement of display strings in the code:
simple Server {
    parameters:
        @display("i=device/server");
    ...
}

network Example {
    parameters:
        @display("bgi=maps/europe");
    submodules:
        server: Server {
            @display("p=273,101");
        }
        ...
    connections:
        client1.out --> { @display("ls=red,3"); } --> server.in++;
}
At runtime, every module and channel object has one single display string object, which controls its appearance in various contexts. The initial value of this display string object comes from merging the @display properties occurring at various places in NED files. This section describes the rules for merging @display properties to create the module or channel's display string.
The base NED type's display string is merged into the current display string using the following rules:
The result of merging the @display properties will be used to initialize the display string object (cDisplayString) of the module or channel. The display string object can then still be modified programmatically at runtime.
Example of display string inheritance:
simple Base {
    @display("i=block/queue");  // use a queue icon in all instances
}

simple Derived extends Base {
    @display("i=,red,60");  // ==> "i=block/queue,red,60"
}

network SimpleQueue {
    submodules:
        submod: Derived {
            @display("i=,yellow,-;p=273,101;r=70");
            // ==> "i=block/queue,yellow;p=273,101;r=70"
        }
        ...
}
The following tags of the module display string are in effect in submodule context, that is, when the module is displayed as a submodule of another module:
The following sections provide an overview and examples for each tag. More detailed information, such as what each tag argument means, is available in Appendix [25].
By default, modules are displayed with a simple default icon, but OMNeT++ comes with a large set of categorized icons that one can choose from. To see what icons are available, look into the images/ folder in the OMNeT++ installation. The stock icons installed with OMNeT++ have several size variants. Most of them have very small (vs), small (s), large (l) and very large (vl) versions.
One can specify the icon with the i tag. The icon name should be given with the name of the subfolder under images/, but without the file name extension. The size may be specified with a suffix in the icon name (_vs for very small, _l for large, etc.), or in a separate is tag.
An example that displays the block/source in large size:
@display("i=block/source;is=l");
Icons may also be colorized, which can often be useful. Color can indicate the status or grouping of the module, or simply serve aesthetic purposes. The following example makes the icon 20% red:
@display("i=block/source,red,20")
Modules may also display a small auxiliary icon in the top-right corner of the main icon. This icon can be useful for displaying the status of the module, for example, and can be set with the i2 tag. Icons suitable for use with i2 are in the status/ category.
An example:
@display("i=block/queue;i2=status/busy")
To have a simple but resizable representation for a module, one can use the b tag to create geometric shapes. Currently, oval and rectangle are supported.
The following example displays an oval shape of the size 70x30 with a 4-pixel black border and red fill:
@display("b=70,30,oval,red,black,4")
The p tag allows one to define the position of a submodule or otherwise affect its placement.
The following example will place the module at the given position:
@display("p=50,79");
If the submodule is a module vector, one can also specify in the p tag how to arrange the elements of the vector. They can be arranged in a row, a column, a matrix or a ring. The rest of the arguments in the p tag depend on the layout type:
A matrix layout for a module vector (note that the first two arguments, x and y are omitted, so the submodule matrix as a whole will be placed by the layouter algorithm):
host[20]: Host { @display("p=,,m,4,50,50"); }
Layout groups allow modules that are not part of the same submodule vector to be arranged in a row, column, matrix or ring formation as described in the p tag's third (and further) parameters.
The g tag expects a single string parameter, the group name. All sibling modules that share the same group name are treated for layouting purposes as if they were part of the same submodule vector, the "index" being the order of submodules within their parent.
In wireless simulations, it is often useful to be able to display a circle or disc around the module to indicate transmission range, reception range, or interference range. This can be done with the r tag.
In the following example, the module will have a circle with a 90-unit radius around it as a range indicator:
submodules: ap: AccessPoint { @display("p=50,79;r=90"); }
If a module contains a queue object (cQueue), it is possible to let the graphical user interface display the queue length next to the module icon. To achieve that, one needs to specify the queue object's name (the string set via the setName() method) in the q display string tag. OMNeT++ finds the queue object by traversing the object tree inside the module.
The following example displays the length of the queue named "jobQueue":
@display("q=jobQueue");
It is possible to have a short text displayed next to or above the module icon or shape using the t tag. The tag lets one specify the placement (left, right, above) and the color of the text. To display text in a tooltip, use the tt tag.
The following example displays text above the module icon, and also adds tooltip text that can be seen by hovering over the module icon with the mouse.
@display("t=Packets sent: 18;tt=Additional tooltip information");
For a detailed description of the display string tags, check Appendix [25].
The following tags of the module display string are in effect when the module itself is opened in a GUI. These tags mostly deal with the visual properties of the background rectangle.
In the following example, the background area is defined to be 6000x4500 units, and the map of Europe is used as a background, stretched to fill the whole area. A grid is also drawn, with 1000 units between major ticks, and 2 minor ticks per major tick.
network EuropePlayground {
    @display("bgb=6000,4500;bgi=maps/europe,s;bgg=1000,2,grey95;bgu=km");
    ...
}
The bgu tag deserves special attention. It does not affect the visual appearance, but instead it is a hint for model code on how to interpret coordinates and distances in this compound module. The above example specifies bgu=km, which means that if the model attaches physical meaning to coordinates and distances, then those numbers should be interpreted as kilometers.
More detailed information, such as what each tag argument means, is available in Appendix [25].
Connections may also have display strings. Connections inherit the display string property from their channel types, in the same way as submodules inherit theirs from module types. The default display strings are empty.
Connections support the following tags:
Example of a thick, red connection:
source1.out --> { @display("ls=red,3"); } --> queue1.in++;
More detailed information, such as what each tag argument means, is available in Appendix [25].
Message display strings affect how messages are shown during animation. By default, they are displayed as a small filled circle, in one of 8 basic colors (the color is determined as message kind modulo 8), and with the message class and/or name displayed under it. The latter is configurable in the Preferences dialog of Qtenv, and message kind dependent coloring can also be turned off there.
Message objects do not store a display string by default. Instead, cMessage defines a virtual getDisplayString() method that one can override in subclasses to return an arbitrary string. The following example adds a display string to a new message class:
class Job : public cMessage
{
  public:
    const char *getDisplayString() const {return "i=msg/packet;is=vs";}
    //...
};
Since message classes are often defined in msg files (see chapter [6]), it is often convenient to let the message compiler generate the getDisplayString() method. To achieve that, add a string field named displayString with an initializer to the message definition. The message compiler will generate setDisplayString() and getDisplayString() methods into the new class, and also set the initial value in the constructor.
An example message file:
message Job
{
    string displayString = "i=msg/package_s,kind";
    //...
}
The following tags can be used in message display strings:
The following example displays a small red box icon:
@display("i=msg/box,red;is=s");
The next one displays a 15x15 rectangle, with white fill, and with a border color dependent on the message kind:
@display("b=15,15,rect,white,kind,5");
More detailed information, such as what each tag argument means, is available in Appendix [25].
Parameters of the module or channel containing the display string can be substituted into the display string with the $parameterName notation:
Example:
simple MobileNode {
    parameters:
        double xpos;
        double ypos;
        string fillColor;
        // get the values from the module parameters xpos, ypos, fillColor
        @display("p=$xpos,$ypos;b=60,10,rect,$fillColor,black,2");
}
A color may be given in several forms. One is English names: blue, lightgrey, wheat, etc.; the list includes all standard SVG color names.
Another acceptable form is the HTML RGB syntax: #rgb or #rrggbb, where r,g,b are hex digits.
It is also possible to specify colors in HSB (hue-saturation-brightness) as @hhssbb (with h, s, b being hex digits). HSB makes it easier to scale colors e.g. from white to bright red.
One can produce a transparent background by specifying a hyphen ("-") as background color.
In message display strings, kind can also be used as a special color name. It will map message kind to a color. (See the getKind() method of cMessage.)
The "i=" display string tag allows for colorization of icons. It accepts a target color and a percentage as the degree of colorization. Percentage has no effect if the target color is missing. Brightness of the icon is also affected -- to keep the original brightness, specify a color with about 50% brightness (e.g. #808080 mid-grey, #008000 mid-green).
Examples:
Colorization works with both submodule and message icons.
In the current OMNeT++ version, module icons are PNG or GIF files. The icons shipped with OMNeT++ are in the images/ subdirectory. The IDE and Qtenv need the exact location of this directory to be able to load the icons.
Icons are loaded from all directories in the image path, a semicolon-separated list of directories. The default image path is compiled into Qtenv with the value "<omnetpp>/images;./images". This works fine (unless the OMNeT++ installation is moved), and the ./images part also allows icons to be loaded from the images/ subdirectory of the current directory. As users typically run simulation models from the model's directory, this practically means that custom icons placed in the images/ subdirectory of the model's directory are automatically loaded.
The compiled-in image path can be overridden with the OMNETPP_IMAGE_PATH environment variable. The way of setting environment variables is system specific: in Unix, if one is using the bash shell, adding a line
export OMNETPP_IMAGE_PATH="$HOME/omnetpp/images;./images"
to ~/.bashrc or ~/.bash_profile will do; on Windows, environment variables can be set via the My Computer --> Properties dialog.
One can extend the image path from omnetpp.ini with the image-path option, which is prepended to the environment variable's value.
[General]
image-path = "/home/you/model-framework/images;/home/you/extra-images"
Icons are organized into several categories, represented by folders. These categories include:
Icon names to be used with the i, bgi and other tags should contain the subfolder (category) name but not the file extension. For example, /opt/omnetpp/images/block/sink.png should be referred to as block/sink.
Icons come in various sizes: normal, large, small, very small, very large. Sizes are encoded into the icon name's suffix: _vl, _l, _s, _vs. In display strings, one can either use the suffix ("i=device/router_l"), or the "is" (icon size) display string tag ("i=device/router;is=l"), but not both at the same time (we recommend using the is tag.)
OMNeT++ implements an automatic layouting feature, using a variation of the Spring Embedder algorithm. Modules which have not been assigned explicit positions via the "p=" tag will be automatically placed by the algorithm.
Spring Embedder is a graph layouting algorithm based on a physical model. Graph nodes (modules) repel each other like electric charges of the same sign, and connections act as springs that pull nodes together. There is also friction built in, in order to prevent oscillation of the nodes. The layouting algorithm simulates this physical system until it reaches equilibrium (or times out). The physical rules above have been slightly tweaked to achieve better results.
The algorithm doesn't move any module which has fixed coordinates. Modules that are part of a predefined arrangement (row, matrix, ring, etc., defined via the 3rd and further args of the "p=" tag) will be moved together, to preserve their relative positions.
Caveats:
It is often useful to manipulate the display string at runtime. Changing colors, icon, or text may convey status change, and changing a module's position is useful when simulating mobile networks.
Display strings are stored in cDisplayString objects inside channels, modules and gates. cDisplayString also lets one manipulate the string.
As far as cDisplayString is concerned, a display string (e.g. "p=100,125;i=cloud") is a string that consists of several tags separated by semicolons; each tag consists of a name and, after an equal sign, zero or more arguments separated by commas.
The class facilitates tasks such as finding out what tags a display string has, adding new tags, adding arguments to existing tags, removing tags or replacing arguments. The internal storage method allows very fast operation; it will generally be faster than direct string manipulation. The class doesn't try to interpret the display string in any way, nor does it know the meaning of the different tags; it merely parses the string as data elements separated by semicolons, equal signs and commas.
To access a component's cDisplayString object, one can call the component's getDisplayString() method.
The display string can be overwritten using the parse() method. Tag arguments can be set with setTagArg(), and tags removed with removeTag().
The following example sets a module's position, icon and status icon in one step:
cDisplayString& dispStr = getDisplayString();
dispStr.parse("p=40,20;i=device/cellphone;i2=status/disconnect");
Setting an outgoing connection's color to red:
cDisplayString& connDispStr = gate("out")->getDisplayString();
connDispStr.parse("ls=red");
Setting module background and grid with background display string tags:
cDisplayString& parentDispStr = getParentModule()->getDisplayString();
parentDispStr.parse("bgi=maps/europe;bgg=100,2");
The following example updates a display string so that it contains the p=40,20 and i=device/cellphone tags:
dispStr.setTagArg("p", 0, 40);
dispStr.setTagArg("p", 1, 20);
dispStr.setTagArg("i", 0, "device/cellphone");
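Display strings can also be queried before being modified. The short sketch below uses the containsTag() and getTagArg() methods of cDisplayString; the use of atof() and the "r" tag removal are just illustrations.

cDisplayString& ds = getDisplayString();
if (ds.containsTag("p")) {
    double x = atof(ds.getTagArg("p", 0));
    double y = atof(ds.getTagArg("p", 1));
    EV << "current position: " << x << ", " << y << endl;
}
ds.removeTag("r");   // drop the range indicator, if any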
Modules can display a transient bubble with a short message (e.g. "Going down" or "Connection established") by calling the bubble() method of cComponent. The method takes the string to be displayed as a const char * pointer.
An example:
bubble("Going down!");
If the module often displays bubbles, it is recommended to make the corresponding code conditional on hasGUI(). The hasGUI() method returns false if the simulation is running under Cmdenv.
if (hasGUI()) {
    char text[32];
    sprintf(text, "Collision! (%d frames)", numCollidingFrames);
    bubble(text);
}
The canvas is the 2D drawing API of OMNeT++. Using the canvas, one can display lines, curves, polygons, images, text items and their combinations, using colors, transparency, geometric transformations, antialiasing and more. Drawings created with the canvas API can be viewed when the simulation is run under a graphical user interface like Qtenv.
Use cases for the canvas API include displaying textual annotations, status information, live statistics in the form of plots, charts, gauges, counters, etc. Other types of simulations may call for different types of graphical presentation. For example, in mobile and wireless simulations, the canvas API can be used to draw the scene including a background (like a street map or floor plan), mobile objects (vehicles, people), obstacles (trees, buildings, hills), antennas with orientation, and also extra information like connectivity graph, movement trails, individual transmissions and so on.
An arbitrary number of drawings (canvases) can be created, and every module already has one by default. A module's default canvas is the one on which the module's submodules and internal connections are also displayed, so the canvas API can be used to enrich the default, display string based presentation of a compound module.
OMNeT++ calls the items that appear on a canvas figures. The corresponding C++ types are cCanvas and cFigure. In fact, cFigure is an abstract base class, and different kinds of figures are represented by various subclasses of cFigure.
Figures can be declared statically in NED files using @figure properties, and can also be accessed, created and manipulated programmatically at runtime.
A canvas is represented by the cCanvas C++ class. A module's default canvas can be accessed with the getCanvas() method of cModule. For example, a toplevel submodule can get hold of the network's canvas with the following line:
cCanvas *canvas = getParentModule()->getCanvas();
Using the canvas pointer, it is possible to check what figures it contains, add new figures, manipulate existing ones, and so on.
New canvases can be created by simply creating new cCanvas objects, like so:
cCanvas *canvas = new cCanvas("liveStatistics"); // arbitrary name string
To view the contents of these additional canvases in Qtenv, one needs to navigate to the canvas' owner object (which will usually be the module that created the canvas), view the list of objects it contains, and double-click the canvas in the list. Giving meaningful names to extra canvas objects like in the example above can simplify the process of locating them in the Qtenv GUI.
The base class of all figure classes is cFigure. The class hierarchy is shown in the figure below.
In subsequent sections, we'll first describe features that are common to all figures, then we'll briefly cover each figure class. Finally, we'll look into how one can define new figure types.
Figures on a canvas are organized into a tree. The canvas has a (hidden) root figure, and all toplevel figures are children of the root figure. Any figure may contain child figures, not only dedicated ones like cGroupFigure.
Every figure also has a name string, inherited from cNamedObject. Since figures are in a tree, every figure also has a hierarchical name. It consists of the names of figures in the path from the root figure down to the figure, joined with dots. (The name of the root figure itself is omitted.)
Child figures can be added to a figure with the addFigure() method, or inserted into the child list of a figure relative to a sibling with the insertBefore() / insertAfter() methods. addFigure() has two flavours: one for appending, and one for inserting at a numeric position. Child figures can be accessed by name (getFigure(name)), or enumerated by index in the child list (getFigure(k), getNumFigures()). One can obtain the index of a child figure using findFigure(). The removeFromParent() method can be used to remove a figure from its parent.
For convenience, cCanvas also has addFigure(), getFigure(), getNumFigures() and other methods for managing toplevel figures without the need to go via the root figure.
The following code enumerates the children of a figure named "group1":
cFigure *parent = canvas->getFigure("group1");
ASSERT(parent != nullptr);
for (int i = 0; i < parent->getNumFigures(); i++)
    EV << parent->getFigure(i)->getName() << endl;
It is also possible to locate a figure by its hierarchical name (getFigureByPath()), and to find figure by its (non-hierarchical) name anywhere in a figure subtree (findFigureRecursively()).
The dup() method of figure classes only duplicates the very figure on which it was called. (The duplicate will not have any children.) To clone a figure including its children, use the dupTree() method.
As mentioned earlier, figures can be defined in the NED file, so they don't always need to be created programmatically. This possibility is useful for creating static backgrounds or statically defining placeholders for dynamically displayed items, among others. Figures defined in NED can be accessed and manipulated from C++ code in the same way as dynamically created ones.
Figures are defined in NED by adding @figure properties to a module definition. The hierarchical name of the figure goes into the property index, that is, in square brackets right after @figure. The parent of the figure must already exist, that is, when defining foo.bar.baz, both foo and foo.bar must have already been defined (in NED).
Type and various attributes of the figure go into property body, as key-valuelist pairs. type=line creates a cLineFigure, type=rectangle creates a cRectangleFigure, type=text creates a cTextFigure, and so on; the list of accepted types is given in appendix [26]. Further attributes largely correspond to getters and setters of the C++ class denoted by the type attribute.
The following example creates a green rectangle and the text "placeholder" in it in NED, and the subsequent C++ code changes the same text to "Hello World!".
NED part:
module Foo {
    @display("bgb=800,500");
    @figure[box](type=rectangle; coords=10,50; size=200,100; fillColor=green);
    @figure[box.label](type=text; coords=20,80; text=placeholder);
}
And the C++ part:
// we assume this code runs in a submodule of the above "Foo" module
cCanvas *canvas = getParentModule()->getCanvas();

// obtain the figure pointer by hierarchical name, and change the text:
cFigure *figure = canvas->getFigureByPath("box.label");
cTextFigure *textFigure = check_and_cast<cTextFigure *>(figure);
textFigure->setText("Hello World!");
The stacking order (a.k.a. Z-order) of figures is jointly determined by the child order and the cFigure attribute called Z-index, with the latter taking priority. Z-index is not used directly, but an effective Z-index is computed instead, as the sum of the Z-index values of the figure and all its ancestors up to the root figure.
A figure with a larger effective Z-index will be displayed above figures with smaller effective Z-indices, regardless of their positions in the figure tree. Among figures whose effective Z-indices are equal, child order determines the stacking order. If two such figures are siblings, the one that occurs later in the child list will be drawn above the other. For figures that are not siblings, the child order within the first common ancestor matters. There are several methods for managing stacking order: setZIndex(), getZIndex(), getEffectiveZIndex(), insertAbove(), insertBelow(), isAbove(), isBelow(), raiseAbove(), lowerBelow(), raiseToTop(), lowerToBottom().
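A small sketch of controlling stacking order with the methods listed above (the canvas pointer and the figure names "background" and "marker" are made up):

cFigure *background = canvas->getFigure("background");
cFigure *marker = canvas->getFigure("marker");
background->setZIndex(0);
marker->setZIndex(10);       // larger effective Z-index -> drawn on top
marker->raiseToTop();        // also move it above its siblings in child order
EV << (marker->isAbove(background) ? "above" : "below") << endl;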
One of the most powerful features of the Canvas API is being able to assign geometric transformations to figures. OMNeT++ uses 2D homogeneous transformation matrices, which are able to express affine transforms such as translation, scaling, rotation and skew (shearing). The transformation matrix used by OMNeT++ has the following format:
\begin{pmatrix} a & c & t_1 \\ b & d & t_2 \\ 0 & 0 & 1 \end{pmatrix}

In a nutshell, given a point with its (x, y) coordinates, one can obtain the transformed version of it by multiplying the transformation matrix by the (x, y, 1) column vector (a.k.a. homogeneous coordinates), and dropping the third component:

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} a & c & t_1 \\ b & d & t_2 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
The result is the point (ax+cy+t1, bx+dy+t2). As one can deduce, a, b, c, d are responsible for rotation, scaling and skew, and t1 and t2 for translation. Also, transforming a point by matrix T1 and then by T2 is equivalent to transforming the point by the matrix T2 T1 due to the associativity of matrix multiplication.
Transformation matrices are represented in OMNeT++ by the cFigure::Transform class.
A cFigure::Transform transformation matrix can be initialized in several ways. First, it is possible to assign its a, b, c, d, t1, t2 members directly (they are public), or via a six-argument constructor. However, it is usually more convenient to start from the identity transform (as created by the default constructor), and invoke one or more of its several scale(), rotate(), skewx(), skewy() and translate() member functions. They update the matrix to (also) perform the given operation (scaling, rotation, skewing or translation), as if left-multiplied by a temporary matrix that corresponds to the operation.
The multiply() method allows one to combine transformations: t1.multiply(t2) sets t1 to the product t2*t1.
To transform a point (represented by the class cFigure::Point), one can use the applyTo() method of Transform. The following code fragment should clarify this:
// allow Transform and Point to be referenced without the cFigure:: prefix
typedef cFigure::Transform Transform;
typedef cFigure::Point Point;

// create a matrix that scales by 2, rotates by 45 degrees, and translates by (100,0)
Transform t = Transform().scale(2.0).rotate(M_PI/4).translate(100,0);

// apply the transform to the point (10, 20)
Point p(10, 20);
Point p2 = t.applyTo(p);
Every figure has an associated transformation matrix, which affects how the figure and its figure subtree are displayed. In other words, the way a figure is displayed is affected by its own transformation matrix and the transformation matrices of all of its ancestors, up to the root figure of the canvas. The effective transform will be the product of those transformation matrices.
A figure's transformation matrix is directly accessible via cFigure's getTransform(), setTransform() member functions. For convenience, cFigure also has several scale(), rotate(), skewx(), skewy() and translate() member functions, which directly operate on the internal transformation matrix.
Some figures have visual aspects that are not, or only optionally, affected by the transform. For example, the size and orientation of the text displayed by cLabelFigure, in contrast to that of cTextFigure, is unaffected by transforms (and by manual zooming as well). Only the position is transformed.
In addition to the translate(), scale(), rotate(), etc. functions that update the figure's transformation matrix, figures also have a move() method. move(), like translate(), also moves the figure by a dx, dy offset. However, move() works by changing the figure's coordinates, and not by changing the transformation matrix.
Since every figure class stores and interprets its position differently, move() is defined for each figure class independently. For example, cPolylineFigure's move() changes the coordinates of each point.
move() is recursive, that is, it not only moves the figure on which it was called, but also its children. There is also a non-recursive variant, called moveLocal().
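A two-line sketch contrasting the two ways of shifting a figure by (10, 0); the figure pointer is assumed to exist:

figure->translate(10, 0);   // adds a translation to the figure's transformation matrix
figure->move(10, 0);        // rewrites the figure's own coordinates (and its children's)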
Figures have a visibility flag that controls whether the figure is displayed. Hiding a figure via the flag will hide the whole figure subtree, not just the figure itself. The flag can be accessed via the isVisible(), setVisible() member functions of cFigure.
Figures can also be assigned a number of textual tags. Tags do not directly affect rendering, but graphical user interfaces that display canvas content, like Qtenv, offer functionality to interactively show/hide figures based on tags they contain. This GUI figure filter allows one to express conditions like "Show only figures that have tag foo or bar, but among them, hide those that also contain tag baz". Tag-based filtering and the visibility flag are in AND relationship -- figures hidden via setVisible(false) cannot be displayed using tags. Also when a figure is hidden using the tag filter, its figure subtree will also be hidden.
The tag list of a figure can be accessed with the getTags() and setTags() cFigure methods. They return/accept a single string that contains the tags separated by spaces (a tag itself cannot contain a space.)
Tags functionality, when used carefully, allows one to define "layers" that can be turned on/off from Qtenv.
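A sketch of such tag-based "layers" (the figure pointers and the tag names are made up):

rangeCircle->setTags("wireless range");      // space-separated list of tags
movementTrail->setTags("wireless trail");
// In Qtenv, the figure filter can now show or hide the "range" and
// "trail" layers independently.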
Figures may be assigned a tooltip text using the setTooltip() method. The tooltip is shown in the runtime GUI when one hovers with the mouse over the figure.
In the visualization of many simulations, some figures correspond to certain objects in the simulation model. For example, a truck image may correspond to a module that represents the mobile node in the simulation. Or, an inflating disc that represents a wireless signal may correspond to a message (cMessage) in the simulation.
One can set the associated object on a figure using the setAssociatedObject() method. The GUI can use this information to provide shortcut access to the associated object, for example to select the object in an inspector when the user clicks the figure, or to display the object's tooltip over the figure if it does not have its own.
Points are represented by the cFigure::Point struct:
struct Point { double x, y; ... };
In addition to the public x, y members and a two-argument constructor for convenient initialization, the struct provides overloaded operators (+,-,*,/) and some utility functions like translate(), distanceTo() and str().
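A small sketch using the Point utilities mentioned above:

cFigure::Point a(10, 20), b(40, 60);
cFigure::Point mid = (a + b) * 0.5;   // overloaded + and * operators
double dist = a.distanceTo(b);        // 50
EV << "midpoint=" << mid.str() << " distance=" << dist << endl;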
Rectangles are represented by the cFigure::Rectangle struct:
struct Rectangle { double x, y, width, height; ... };
A rectangle is specified with the coordinates of its top-left corner, plus its width and height. The latter two are expected to be nonnegative. In addition to the public x, y, width, height members and a four-argument constructor for convenient initialization, the struct also has utility functions like getCenter(), getSize(), translate() and str().
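A small sketch using the Rectangle utilities mentioned above (getCenter() and getSize() are assumed here to return Point values):

cFigure::Rectangle r(10, 20, 100, 50);    // x, y, width, height
cFigure::Point center = r.getCenter();    // (60, 45)
cFigure::Point size = r.getSize();        // (100, 50)
EV << r.str() << " center=" << center.str() << endl;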
Colors are represented by the cFigure::Color struct as 24-bit RGB colors:
struct Color { uint8_t red, green, blue; ... };
In addition to the public red, green, blue members and a three-argument constructor for convenient initialization, the struct also has a string-based constructor and str() function. The string form accepts various notations: HTML colors (#rrggbb), HSB colors in a similar notation (@hhssbb), and English color names (SVG and X11 color names, to be more precise.)
However, one doesn't always need to use Color directly. There are predefined constants for the basic colors (BLACK, WHITE, GREY, RED, GREEN, BLUE, YELLOW, CYAN, MAGENTA), as well as a collection of carefully chosen dark and light colors, suitable e.g. for chart drawing, in the arrays GOOD_DARK_COLORS[] and GOOD_LIGHT_COLORS[]. For convenience, the number of colors in each array is available in the NUM_GOOD_DARK_COLORS and NUM_GOOD_LIGHT_COLORS constants.
The following ways of specifying colors are all valid:
cFigure::BLACK;
cFigure::Color("steelblue");
cFigure::Color("#3d7a8f");
cFigure::Color("@20ff80");
cFigure::GOOD_DARK_COLORS[2];
cFigure::GOOD_LIGHT_COLORS[intrand(cFigure::NUM_GOOD_LIGHT_COLORS)];
The requested font for text figures is represented by the cFigure::Font struct. It stores the typeface, font style and font size in one.
struct Font { std::string typeface; int pointSize; uint8_t style; ... };
The font does not need to be fully specified; there are defaults. When typeface is set to the empty string, or when pointSize is zero or a negative value, the default font or the default size is used, respectively.
The style field can be either FONT_NONE, or the binary OR of the following constants: FONT_BOLD, FONT_ITALIC, FONT_UNDERLINE.
The struct also has a three-argument constructor for convenient initialization, and an str() function that returns a human-readable text representation of the contents.
Some examples:
cFigure::Font("Arial");        // default size, normal
cFigure::Font("Arial", 12);    // 12pt, normal
cFigure::Font("Arial", 12, cFigure::FONT_BOLD | cFigure::FONT_ITALIC);
cFigure also contains a number of enums as inner types to describe various line, shape, text and image properties. Here they are:
LineStyle
Values: LINE_SOLID, LINE_DOTTED, LINE_DASHED
This enum (cFigure::LineStyle) is used by line and shape figures to determine their line/border style. The precise graphical interpretation, e.g. dash lengths for the dashed style, depends on the graphics library that the GUI was implemented with.
CapStyle
Values: CAP_BUTT, CAP_ROUND, CAP_SQUARE
This enum is used by line and path figures, and it indicates the shape to be used at the end of the lines or open subpaths.
JoinStyle
Values: JOIN_BEVEL, JOIN_ROUND, JOIN_MITER
This enum indicates the shape to be used when two line segments are joined, in line or shape figures.
FillRule
Values: FILL_EVENODD, FILL_NONZERO.
This enum determines which regions of a self-intersecting shape should be considered to be inside the shape, and thus be filled.
Arrowhead
Values: ARROW_NONE, ARROW_SIMPLE, ARROW_TRIANGLE, ARROW_BARBED.
Some figures support displaying arrowheads at one or both ends of a line. This enum determines the style of the arrowhead to be used.
Interpolation
Values: INTERPOLATION_NONE, INTERPOLATION_FAST, INTERPOLATION_BEST.
Interpolation is used for rendering an image when it is not displayed at its native resolution. This enum indicates the algorithm to be used for interpolation.
The mode none selects the "nearest neighbor" algorithm. Fast emphasizes speed, and best emphasizes quality; however, the exact choice of algorithm (bilinear, bicubic, quadratic, etc.) depends on features of the graphics library that the GUI was implemented with.
Anchor
Values:
ANCHOR_CENTER, ANCHOR_N, ANCHOR_E, ANCHOR_S, ANCHOR_W,
ANCHOR_NW, ANCHOR_NE, ANCHOR_SE, ANCHOR_SW;
ANCHOR_BASELINE_START, ANCHOR_BASELINE_MIDDLE,
ANCHOR_BASELINE_END.
Some figures like text and image figures are placed by specifying a single point (position) plus an anchor mode, a value from this enum. The anchor mode indicates which point of the bounding box of the figure should be positioned over the specified point. For example, when using ANCHOR_N, the figure is placed so that its top-middle point falls at the specified point.
The last three, baseline constants are only used with text figures, and indicate that the start, middle or end of the text's baseline is the anchor point.
Now that we know all about figures in general, we can look into the specific figure classes provided by OMNeT++.
cAbstractLineFigure is the common base class for various line figures, providing line color, style, width, opacity, arrowhead and other properties for them.
Line color can be set with setLineColor(), and line width with setLineWidth(). Lines can be solid, dashed, dotted, etc.; line style can be set with setLineStyle(). The default line color is black.
Lines can be partially transparent. This property can be controlled with setLineOpacity() that takes a double between 0 and 1: a zero argument means fully transparent, and one means fully opaque.
Lines can have various cap styles: butt, square, round, etc., which can be selected with setCapStyle(). Join style, which is a related property, is not part of cAbstractLineFigure but instead added to specific subclasses where it makes sense.
Lines may also be augmented with arrowheads at either or both ends. Arrowheads can be selected with setStartArrowhead() and setEndArrowhead().
Transformations such as scaling or skew do affect the width of the line as it is rendered on the canvas. Whether zooming (by the user) should also affect it can be controlled by setting a flag (setZoomLineWidth()). The default is non-zooming lines.
Specifying zero for line width is currently not allowed. To hide the line, use setVisible(false).
cLineFigure displays a single straight line segment. The endpoints of the line can be set with the setStart()/setEnd() methods. Other properties such as color and line style are inherited from cAbstractLineFigure.
An example that draws an arrow from (0,0) to (100,50):
cLineFigure *line = new cLineFigure("line");
line->setStart(cFigure::Point(0,0));
line->setEnd(cFigure::Point(100,50));
line->setLineWidth(2);
line->setEndArrowhead(cFigure::ARROW_BARBED);
The result:
cArcFigure displays an axis-aligned arc. (To display a non-axis-aligned arc, apply a transform to cArcFigure, or use cPathFigure.) The arc's geometry is determined by the bounding box of the circle or ellipse, and a start and end angle; they can be set with the setBounds(), setStartAngle() and setEndAngle() methods. Other properties such as color and line style are inherited from cAbstractLineFigure.
For angles, zero points east. Angles that go counterclockwise are positive, and those that go clockwise are negative.
Here is an example that draws a blue arc with an arrowhead that goes counter-clockwise from 3 hours to 12 hours on the clock:
cArcFigure *arc = new cArcFigure("arc");
arc->setBounds(cFigure::Rectangle(10,10,100,100));
arc->setStartAngle(0);
arc->setEndAngle(M_PI/2);
arc->setLineColor(cFigure::BLUE);
arc->setEndArrowhead(cFigure::ARROW_BARBED);
The result:
By default, cPolylineFigure displays a series of connected straight line segments. The class stores geometry information as a sequence of points. The line may be smoothed, so the figure can also display complex curves.
The points can be set with setPoints(), which takes an std::vector<Point>, or added one-by-one using addPoint(). Elements in the point list can be read and overwritten (getPoint(), setPoint()). One can also insert and remove points (insertPoint(), removePoint()).
A smoothed line is drawn as a series of Bezier curves, which touch the start point of the first line segment, the end point of the last line segment, and the midpoints of intermediate line segments, while intermediate points serve as control points. Smoothing can be turned on/off using setSmooth().
Additional properties such as color and line style are inherited from cAbstractLineFigure. Line join style (which is not part of cAbstractLineFigure) can be set with setJoinStyle().
Here is an example that uses a smoothed polyline to draw a spiral:
cPolylineFigure *polyline = new cPolylineFigure("polyline");
const double C = 1.1;
for (int i = 0; i < 10; i++)
    polyline->addPoint(cFigure::Point(5*i*cos(C*i), 5*i*sin(C*i)));
polyline->move(100, 100);
polyline->setSmooth(true);
The result, with both smooth=false and smooth=true:
cAbstractShapeFigure is an abstract base class for various shapes, providing line and fill color, line and fill opacity, line style, line width, and other properties for them.
Both outline and fill are optional; they can be turned on and off independently with the setOutlined() and setFilled() methods. The default is an outlined but unfilled shape.
Similar to cAbstractLineFigure, line color can be set with setLineColor(), and line width with setLineWidth(). Lines can be solid, dashed, dotted, etc.; line style can be set with setLineStyle(). The default line color is black.
Fill color can be set with setFillColor(). The default fill color is blue (although it has no visible effect until setFilled(true) is called).
Shapes can be partially transparent, and opacity can be set individually for the outline and the fill, using setLineOpacity() and setFillOpacity(). These functions accept a double between 0 and 1: a zero argument means fully transparent, and one means fully opaque.
When the outline is drawn with a width larger than one pixel, it will be drawn symmetrically, i.e. approximately 50-50% of its width will fall inside and outside the shape. (This also means that the fill and a wide outline will partially overlap, but that is only apparent if the outline is also partially transparent.)
Transformations such as scaling or skew do affect the width of the line as it is rendered on the canvas. Whether zooming (by the user) should also affect it can be controlled by setting a flag (setZoomLineWidth()). The default is non-zooming lines.
Specifying zero for line width is currently not allowed. To hide the outline, setOutlined(false) can be used.
cRectangleFigure displays an axis-aligned rectangle with optionally rounded corners. As with all shape figures, drawing of both the outline and the fill are optional. Line and fill color, and several other properties are inherited from cAbstractShapeFigure.
The figure's geometry can be set with the setBounds() method that takes a cFigure::Rectangle. The radii for the rounded corners can be set independently for the x and y direction using setCornerRx() and setCornerRy(), or together with setCornerRadius().
The following example draws a rounded rectangle of size 160x100, filled with one of the predefined "good light colors".
cRectangleFigure *rect = new cRectangleFigure("rect");
rect->setBounds(cFigure::Rectangle(100,100,160,100));
rect->setCornerRadius(5);
rect->setFilled(true);
rect->setFillColor(cFigure::GOOD_LIGHT_COLORS[0]);
The result:
cOvalFigure displays a circle or an axis-aligned ellipse. As with all shape figures, drawing of both the outline and the fill are optional. Line and fill color, and several other properties are inherited from cAbstractShapeFigure.
The geometry is specified with the bounding box, and it can be set with the setBounds() method that takes a cFigure::Rectangle.
The following example draws a circle of diameter 120 with a wide dotted line.
cOvalFigure *circle = new cOvalFigure("circle");
circle->setBounds(cFigure::Rectangle(100,100,120,120));
circle->setLineWidth(2);
circle->setLineStyle(cFigure::LINE_DOTTED);
The result:
cRingFigure displays a ring, with explicitly controllable inner/outer radii. The inner and outer circles (or ellipses) form the outline, and the area between them is filled. As with all shape figures, drawing of both the outline and the fill are optional. Line and fill color, and several other properties are inherited from cAbstractShapeFigure.
The geometry is determined by the bounding box that defines the outer circle, and the x and y radii of the inner oval. They can be set with the setBounds(), setInnerRx() and setInnerRy() member functions. There is also a utility method for setting both inner radii together, named setInnerRadius().
The following example draws a ring with an outer diameter of 50 and inner diameter of 20.
cRingFigure *ring = new cRingFigure("ring");
ring->setBounds(cFigure::Rectangle(100,100,50,50));
ring->setInnerRadius(10);
ring->setFilled(true);
ring->setFillColor(cFigure::YELLOW);
cPieSliceFigure displays a pie slice, that is, a section of an axis-aligned disc or filled ellipse. The outline of the pie slice consists of an arc and two radii. As with all shape figures, drawing of both the outline and the fill are optional.
Similar to an arc, a pie slice is determined by the bounding box of the full disc or ellipse, and a start and an end angle. They can be set with the setBounds(), setStartAngle() and setEndAngle() methods.
For angles, zero points east. Angles that go counterclockwise are positive, and those that go clockwise are negative.
Line and fill color, and several other properties are inherited from cAbstractShapeFigure.
The following example draws a pie slice that is one third of a whole pie:
cPieSliceFigure *pieslice = new cPieSliceFigure("pieslice");
pieslice->setBounds(cFigure::Rectangle(100,100,100,100));
pieslice->setStartAngle(0);
pieslice->setEndAngle(2*M_PI/3);
pieslice->setFilled(true);
pieslice->setLineColor(cFigure::BLUE);
pieslice->setFillColor(cFigure::YELLOW);
The result:
cPolygonFigure displays a (closed) polygon, determined by a sequence of points. The polygon may be smoothed. A smoothed polygon is drawn as a series of cubic Bezier curves, where the curves touch the midpoints of the sides, and vertices serve as control points. Smoothing can be turned on/off using setSmooth().
The points can be set with setPoints(), which takes an std::vector<Point>, or added one-by-one using addPoint(). Elements in the point list can be read and overwritten (getPoint(), setPoint()). One can also insert and remove points (insertPoint() and removePoint()).
As with all shape figures, drawing of both the outline and the fill are optional. The drawing of filled self-intersecting polygons is controlled by the fill rule, which defaults to even-odd (FILL_EVENODD), and can be set with setFillRule(). Line join style can be set with the setJoinStyle().
Line and fill color, and several other properties are inherited from cAbstractShapeFigure.
Here is an example of a smoothed polygon that also demonstrates the use of setPoints():
cPolygonFigure *polygon = new cPolygonFigure("polygon");
std::vector<cFigure::Point> points;
points.push_back(cFigure::Point(0, 100));
points.push_back(cFigure::Point(50, 100));
points.push_back(cFigure::Point(100, 100));
points.push_back(cFigure::Point(50, 50));
polygon->setPoints(points);
polygon->setLineColor(cFigure::BLUE);
polygon->setLineWidth(3);
polygon->setSmooth(true);
The result, with both smooth=false and smooth=true:
cPathFigure displays a "path", a complex shape or line modeled after SVG paths. A path may consist of any number of straight line segments, Bezier curves and arcs. The path can be disjoint as well. Closed paths may be filled. The drawing of filled self-intersecting polygons is controlled by the fill rule property. Line and fill color, and several other properties are inherited from cAbstractShapeFigure.
A path, when given as a string, looks like this one that draws a triangle:
M 150 0 L 75 200 L 225 200 Z
It consists of a sequence of commands (M for moveto, L for lineto, etc.), each followed by numeric parameters (except Z, which takes none). All commands can also be written with a lowercase letter. A capital letter means that the target point is given in absolute coordinates, while a lowercase letter means the coordinates are relative to the target point of the previous command.
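For illustration, the same triangle written with relative commands might look like this (assuming the SVG convention that an initial relative moveto is treated as absolute, since there is no previous point):
m 150 0 l -75 200 l 150 0 z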
cPathFigure can accept the path in string form (setPath()), or one can assemble the path with a series of method calls like addMoveTo(). The path can be cleared with the clearPath() method.
Each path command, together with its argument list, has a corresponding add method on cPathFigure. In the argument lists, (x,y) denotes the target point (substitute (dx,dy) for the lowercase, relative versions). For the Bezier curves, (x1,y1) and (x2,y2) are control points. For the arc command, rx and ry are the radii of the ellipse, phi is the rotation angle of the ellipse in degrees, and largeArc and sweep are both booleans (0 or 1) that select which portion of the ellipse should be taken.
No matter how the path was created, the string form can be obtained with the getPath() method, and the parsed form with the getNumPathItems() and getPathItem(k) methods. The latter returns a pointer to a cPathFigure::PathItem, a base class with subclasses for every item type.
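As a quick illustration that uses only the methods mentioned above:
cPathFigure *path = new cPathFigure("path");
path->setPath("M 0 150 L 50 50 Q 20 120 100 150 Z");
EV << "path: " << path->getPath() << "\n";
EV << "number of items: " << path->getNumPathItems() << "\n";
// individual items can then be obtained with getPathItem(k) and cast to the
// item-specific PathItem subclass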
Line join style, cap style (for open subpaths), and fill rule (for closed subpaths) can be set with the setJoinStyle(), setCapStyle(), setFillRule() methods.
cPathFigure has one more property, a (dx,dy) offset, which exists to simplify the implementation of the move() method. The offset causes the figure to be translated by the given amount for drawing. For other figure types, move() directly updates the coordinates, so it is effectively a wrapper for setPosition() or setBounds(). For path figures, implementing move() so that it updates every path item would be cumbersome and potentially confusing for users; instead, move() updates the offset. The offset can also be set directly with setOffset().
In the first example, the path is given as a string:
cPathFigure *path = new cPathFigure("path");
path->setPath("M 0 150 L 50 50 Q 20 120 100 150 Z");
path->setFilled(true);
path->setLineColor(cFigure::BLUE);
path->setFillColor(cFigure::YELLOW);
The second example creates the equivalent path programmatically.
cPathFigure *path2 = new cPathFigure("path");
path2->addMoveTo(0,150);
path2->addLineTo(50,50);
path2->addCurveTo(20,120,100,150);
path2->addClosePath();
path2->setFilled(true);
path2->setLineColor(cFigure::BLUE);
path2->setFillColor(cFigure::YELLOW);
The result:
cAbstractTextFigure is an abstract base class for figures that display (potentially multi-line) text.
The location of the text on the canvas is determined jointly by a position and an anchor. The anchor tells how to place the text relative to the positioning point. For example, if anchor is ANCHOR_CENTER then the text is centered on the point; if anchor is ANCHOR_N then the text will be drawn so that its top center point is at the positioning point. The values ANCHOR_BASELINE_START, ANCHOR_BASELINE_MIDDLE, ANCHOR_BASELINE_END refer to the beginning, middle and end of the baseline of the (first line of the) text as anchor point. The member functions to set the positioning point and the anchor are setPosition() and setAnchor(). Anchor defaults to ANCHOR_CENTER.
The font can be set with the setFont() member function, which takes a cFigure::Font, a class that encapsulates typeface, font style and size. Color can be set with setColor(). The displayed text can also be partially transparent; this is controlled with the setOpacity() member function, which accepts a double in the [0,1] range, 0 meaning fully transparent (invisible) and 1 meaning fully opaque.
It is also possible to have a partially transparent “halo” displayed around the text. The halo improves readability when the text is displayed over a background that has a similar color as the text, or when it overlaps with other text items. The halo can be turned on with setHalo().
cTextFigure displays text which is affected by zooming and transformations. Font, color, position, anchoring and other properties are inherited from cAbstractTextFigure.
The following example displays a text in dark blue 12-point bold Arial font.
cTextFigure *text = new cTextFigure("text");
text->setText("This is some text.");
text->setPosition(cFigure::Point(100,100));
text->setAnchor(cFigure::ANCHOR_BASELINE_MIDDLE);
text->setColor(cFigure::Color("#000040"));
text->setFont(cFigure::Font("Arial", 12, cFigure::FONT_BOLD));
The result:
cLabelFigure displays text which is unaffected by zooming or transformations, except its position. Font, color, position, anchoring and other properties are inherited from cAbstractTextFigure. The angle of the label can be set with the setAngle() method. Zero angle means horizontal (unrotated) text. Positive values rotate counterclockwise, while negative values rotate clockwise.
The following example displays a label in Courier New with the default size, slightly transparent.
cLabelFigure *label = new cLabelFigure("label");
label->setText("This is a label.");
label->setPosition(cFigure::Point(100,100));
label->setAnchor(cFigure::ANCHOR_NW);
label->setFont(cFigure::Font("Courier New"));
label->setOpacity(0.9);
The result:
cAbstractImageFigure is an abstract base class for image figures.
The location of the image on the canvas is determined jointly by a position and an anchor. The anchor tells how to place the image relative to the positioning point. For example, if anchor is ANCHOR_CENTER then the image is centered on the point; if anchor is ANCHOR_N then the image will be drawn so that its top center point is at the positioning point. The member functions to set the positioning point and the anchor are setPosition() and setAnchor(). Anchor defaults to ANCHOR_CENTER.
By default, the figure's width and height are taken from the image's dimensions in pixels. This can be overridden with the setWidth() / setHeight() methods, causing the image to be scaled. setWidth(0) / setHeight(0) restore the default (automatic) width and height.
One can choose from several interpolation modes that control how the image is rendered when it is not drawn in its natural size. Interpolation mode can be set with setInterpolation(), and defaults to INTERPOLATION_FAST.
Images can be tinted; this feature is controlled by a tint color and a tint amount, a [0,1] real number. They can be set with the setTintColor() and setTintAmount() methods, respectively.
Images may also be rendered as partially transparent, which is controlled by the opacity property, a [0,1] real number. Opacity can be set with the setOpacity() method. The rendering process will combine this property with the transparency information contained within the image, i.e. the alpha channel.
cImageFigure displays an image, typically an icon or a background image, loaded from the OMNeT++ image path. Positioning and other properties are inherited from cAbstractImageFigure. Unlike cIconFigure, cImageFigure fully obeys transforms and zoom.
The following example displays a map:
cImageFigure *image = new cImageFigure("map");
image->setPosition(cFigure::Point(0,0));
image->setAnchor(cFigure::ANCHOR_NW);
image->setImageName("maps/europe");
image->setWidth(600);
image->setHeight(500);
cIconFigure displays a non-zooming image, loaded from the OMNeT++ image path. Positioning and other properties are inherited from cAbstractImageFigure.
cIconFigure is not affected by transforms or zoom, except its position. (It can still be resized, though, via setWidth() / setHeight().)
The following example displays an icon similar to the way the "i=block/sink,gold,30" display string tag would, and makes it slightly transparent:
cIconFigure *icon = new cIconFigure("icon");
icon->setPosition(cFigure::Point(100,100));
icon->setImageName("block/sink");
icon->setTintColor(cFigure::Color("gold"));
icon->setTintAmount(0.6);
icon->setOpacity(0.8);
The result:
cPixmapFigure displays a user-defined raster image. A pixmap figure may be used to display e.g. a heat map. Support for scaling and various interpolation modes are useful here. Positioning and other properties are inherited from cAbstractImageFigure.
A pixmap itself is represented by the cFigure::Pixmap class.
cFigure::Pixmap stores a rectangular array of 32-bit RGBA pixels, and allows pixels to be manipulated directly. The size (width x height) as well as the default fill can be specified in the constructor. The pixmap can be resized (i.e. pixels added/removed at the right and/or bottom) using setSize(), and it can be filled with a color using fill(). Pixels can be directly accessed with pixel(x,y).
A pixel is returned as type cFigure::RGBA, which is a convenience struct that, in addition to having the four public uint8_t fields (red, green, blue, alpha), is augmented with several utility methods.
Many Pixmap and RGBA methods accept or return cFigure::Color and opacity, converting between them and RGBA. (Opacity is a [0,1] real number that is mapped to the 0..255 alpha channel. 0 means fully transparent, and 1 means fully opaque.)
One can set up and manipulate the image that cPixmapFigure displays in two ways. First, one can create and fill a cFigure::Pixmap separately, and set it on cPixmapFigure using setPixmap(). This will overwrite the figure's internal pixmap instance that it displays. The second way is to utilize cPixmapFigure's methods such as setPixmapSize(), fill(), setPixel(), setPixelColor(), setPixelOpacity(), etc. that delegate to the internal pixmap instance.
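A rough sketch of the first approach follows; the Pixmap constructor form (width, height, fill color, opacity) is an assumption based on the description above.
// Build a Pixmap separately and hand it to the figure.
cFigure::Pixmap pixmap(16, 16, cFigure::BLACK, 1);  // 16x16, opaque black (constructor form assumed)
for (int x = 0; x < 16; x++) {
    cFigure::RGBA& px = pixmap.pixel(x, x);  // direct pixel access
    px.red = 255; px.green = 0; px.blue = 0; px.alpha = 255;  // red diagonal
}
cPixmapFigure *figure = new cPixmapFigure("pixmap");
figure->setPixmap(pixmap);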
The following example displays a small heat map by manipulating the transparency of the pixels. The 9-by-9 pixel image is stretched to 100 units each direction on the screen.
cPixmapFigure *pixmapFigure = new cPixmapFigure("pixmap");
pixmapFigure->setPosition(cFigure::Point(100,100));
pixmapFigure->setSize(100, 100);
pixmapFigure->setPixmapSize(9, 9, cFigure::BLUE, 1);
for (int y = 0; y < pixmapFigure->getPixmapHeight(); y++) {
    for (int x = 0; x < pixmapFigure->getPixmapWidth(); x++) {
        double opacity = 1 - sqrt((x-4)*(x-4) + (y-4)*(y-4))/4;
        if (opacity < 0)
            opacity = 0;
        pixmapFigure->setPixelOpacity(x, y, opacity);
    }
}
pixmapFigure->setInterpolation(cFigure::INTERPOLATION_FAST);
The result, both with interpolation=NONE and interpolation=FAST:
cGroupFigure exists for the sole purpose of grouping its children; it has no visual appearance of its own. The usefulness of a group figure comes from the fact that the elements of a group can be hidden or shown together, and that transformations are inherited from parent to child; thus, the children of a group can be moved, scaled, rotated, etc. together by updating the group's transformation matrix.
The following example creates a group with two subfigures, then moves and rotates them as one unit.
cGroupFigure *group = new cGroupFigure("group");

cRectangleFigure *rect = new cRectangleFigure("rect");
rect->setBounds(cFigure::Rectangle(-50,0,100,40));
rect->setCornerRadius(5);
rect->setFilled(true);
rect->setFillColor(cFigure::YELLOW);

cLineFigure *line = new cLineFigure("line");
line->setStart(cFigure::Point(-80,50));
line->setEnd(cFigure::Point(80,50));
line->setLineWidth(3);

group->addFigure(rect);
group->addFigure(line);
group->translate(100, 100);
group->rotate(M_PI/6, 100, 100);
The result:
cPanelFigure is similar to cGroupFigure in that it is also intended for grouping its children and has no visual appearance of its own. However, it has a special behavior regarding transformations and especially zooming.
cPanelFigure sets up an axis-aligned, unscaled coordinate system for its children, canceling the effect of any transformation (scaling, rotation, etc.) inherited from ancestor figures. This allows for pixel-based positioning of children, and makes them immune to zooming.
Unlike cGroupFigure, which has no position of its own, cPanelFigure uses two points for positioning: a position and an anchor point. The position is interpreted in the coordinate system of the panel figure's parent, while the anchor point is interpreted in the coordinate system of the panel figure itself. To place the panel figure on the canvas, the panel's anchor point is mapped to the position in the parent.
Setting a transformation on the panel figure itself allows for rotation, scaling, and skewing of its children. The anchor point is also affected by this transformation.
The following example demonstrates the behavior of cPanelFigure. It creates a normal group figure as the parent of the panel, and sets up a skewed coordinate system on it. A reference image is added to the group to make the effect of the skew visible, and the panel figure is added as another child. The panel contains an image (showing the same icon as the reference image) and a border around it.
cGroupFigure *layer = new cGroupFigure("parent");
layer->skewx(-0.3);

cImageFigure *referenceImg = new cImageFigure("ref");
referenceImg->setImageName("block/broadcast");
referenceImg->setPosition(cFigure::Point(200,200));
referenceImg->setOpacity(0.3);
layer->addFigure(referenceImg);

cPanelFigure *panel = new cPanelFigure("panel");

cImageFigure *img = new cImageFigure("img");
img->setImageName("block/broadcast");
img->setPosition(cFigure::Point(0,0));
panel->addFigure(img);

cRectangleFigure *border = new cRectangleFigure("border");
border->setBounds(cFigure::Rectangle(-25,-25,50,50));
border->setLineWidth(3);
panel->addFigure(border);

layer->addFigure(panel);
panel->setAnchorPoint(cFigure::Point(0,0));
panel->setPosition(cFigure::Point(210,200));
The screenshot shows the result at an approx. 4x zoom level. The large semi-transparent image is the reference image, the smaller one is the image within the panel figure. Note that neither the skew nor the zoom has affected the panel figure's children.
Any graphics can be built using primitive (i.e. elementary) figures alone. However, when the graphical presentation of a simulation grows complex, it is often convenient to be able to group certain figures and treat them as a single unit. For example, although a bar chart can be displayed using several independent rectangle, line and text items, there are clearly benefits to being able to handle them together as a single bar chart object.
Compound figures are cFigure subclasses that are themselves composed of several figures, but can be instantiated and manipulated as a single figure. Compound figure classes can be used from C++ code like normal figures, and can also be made instantiable from @figure properties.
Compound figure classes usually subclass from cGroupFigure. The class would typically maintain pointers to its subfigures in class members, and its methods (getters, setters, etc.) would operate on the subfigures.
To be able to use the new C++ class with @figure, it needs to be registered using the Register_Figure() macro. The macro expects two arguments: one is the type name by which the figure is known to @figure (the string to be used with the type property key), and the other is the C++ class name. For example, to be able to instantiate a class named FooFigure with @figure[...](type=foo;...), the following line needs to be added into the C++ source:
Register_Figure("foo", FooFigure);
If the figure needs to be able to take values from @figure properties, the class needs to override the parse(cProperty*) method, and probably also getAllowedPropertyKeys(). We recommend examining the code of the figure classes built into OMNeT++ for implementation hints.
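Putting the above together, a minimal compound figure might look like the following rough sketch; the class name, its members and the "badge" type name are made up for illustration, and parse() support is omitted.
// Rough sketch of a compound figure: a rounded box with a centered label.
class BadgeFigure : public cGroupFigure
{
  private:
    cRectangleFigure *box;
    cTextFigure *label;

  public:
    BadgeFigure(const char *name = nullptr) : cGroupFigure(name) {
        box = new cRectangleFigure("box");
        box->setBounds(cFigure::Rectangle(-40, -15, 80, 30));
        box->setCornerRadius(5);
        box->setFilled(true);
        box->setFillColor(cFigure::GOOD_LIGHT_COLORS[0]);
        addFigure(box);

        label = new cTextFigure("label");
        label->setAnchor(cFigure::ANCHOR_CENTER);
        label->setPosition(cFigure::Point(0, 0));  // center of the box
        addFigure(label);
    }
    void setLabel(const char *text) { label->setText(text); }
};

Register_Figure("badge", BadgeFigure);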
Most figures are entirely passive objects. When they need to be moved or updated during the course of the simulation, there must be an active component in the simulation that does it for them. Usually it is the refreshDisplay() method of some simple module (or modules) that contain the code that updates various properties of the figures.
However, certain figures can benefit from being able to refresh themselves during the simulation. Picture, for example, a compound figure (see previous section) that displays a line chart which is continually updated with new data as the simulation progresses. The LineChartFigure class may contain an addDataPoint(x,y) method which is called from other parts of the simulation to add new data to the chart. The question is when to update the subfigures that make up the chart: the line(s), axis ticks and labels, etc. It is clearly not very efficient to do it in every addDataPoint(x,y) call, especially when the simulation is running in Express mode when the screen is not refreshed very often. Luckily, our hypothetical LineChartFigure class can do better, and only refresh its subfigures when it matters, i.e. when the result can actually be seen in the GUI. To do that, the class needs to override cFigure's refreshDisplay() method, and place the subfigure updating code there.
Figure classes that override refreshDisplay() to refresh their own contents are called self-refreshing figures. Self-refreshing figures as a feature are available since OMNeT++ version 5.1.
refreshDisplay() is declared on cFigure as:
virtual void refreshDisplay();
The default implementation does nothing.
Like cModule's refreshDisplay(), cFigure's refreshDisplay() is invoked only under graphical user interfaces (Qtenv), and right before display updates. However, it is only invoked for figures on canvases that are currently displayed. This makes it possible for canvases that are never viewed to have zero refresh overhead.
Since cFigure's refreshDisplay() is only invoked when the canvas is visible, it should only be used to update local state, i.e. only local members and local subfigures. The code should certainly not access other canvases, let alone change the state of the simulation.
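To make this concrete, here is a rough sketch of the hypothetical LineChartFigure mentioned above; only the self-refreshing aspect is shown, and all names are illustrative.
// addDataPoint() only buffers the data; the visible polyline is updated
// lazily in refreshDisplay().
class LineChartFigure : public cGroupFigure
{
  private:
    cPolylineFigure *curve;
    std::vector<cFigure::Point> pendingPoints;

  public:
    LineChartFigure(const char *name = nullptr) : cGroupFigure(name) {
        curve = new cPolylineFigure("curve");
        addFigure(curve);
    }
    void addDataPoint(double x, double y) {
        pendingPoints.push_back(cFigure::Point(x, -y));  // canvas y axis points downward
    }
    virtual void refreshDisplay() override {
        // only called when the canvas is actually being displayed
        for (const auto& p : pendingPoints)
            curve->addPoint(p);
        pendingPoints.clear();
    }
};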
In rare cases it might be necessary to create figure types where the rendering is entirely custom, and not based on already existing figures. The difficulty arises from the point that figures are only data storage classes, actual drawing takes place in the GUI library such as Qtenv. Thus, in addition to writing the new figure class, one also needs to extend Qtenv with the corresponding rendering code. We won't go into full details on how to extend Qtenv here, just give you a few pointers in case you need it.
In Qtenv, rendering is done with the help of figure renderer classes, which form a class hierarchy roughly parallel to the cFigure inheritance tree; the base class is called FigureRenderer. How figure renderers do their job may differ across graphical runtime interfaces; in Qtenv, they create and manipulate QGraphicsItems on a QGraphicsView. To be able to render a new figure type, one needs to write the appropriate figure renderer class for Qtenv.
The names of the renderer classes are provided by the figures themselves, via their getRendererClassName() method. For example, cLineFigure's getRendererClassName() returns "LineFigureRenderer". Qtenv qualifies that name with its own namespace and looks for a registered class named omnetpp::qtenv::LineFigureRenderer. If such a class exists and is a Qtenv figure renderer (i.e. the appropriate dynamic_cast succeeds), an instance of that class is used to render the figure; otherwise an error message is issued.
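On the figure side, this only means naming the renderer class; for a hypothetical FooFigure it might look like the sketch below (the const char * return type is assumed here, and the FooFigureRenderer class itself must be implemented and registered inside Qtenv).
// In the FooFigure class declaration:
virtual const char *getRendererClassName() const override { return "FooFigureRenderer"; }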
OMNeT++ lets one build advanced 3D visualization for simulation models. 3D visualization is useful for a wide range of simulations, including mobile wireless networks, transportation models, factory floorplan simulations and more. One can visualize terrain, roads, urban street networks, indoor environments, satellites, and more. It is possible to augment the 3D scene with various annotations. For wireless network simulations, for example, one can create a scene that, in addition to the faithful representation of the physical world, also displays the transmission range of wireless nodes, their connectivity graph and various statistics, indicates individual wireless transmissions or traffic intensity, and so on.
In OMNeT++, 3D visualization is completely distinct from display string-based and canvas-based visualization. The scene appears on a separate GUI area.
OMNeT++'s 3D visualization is based on the open-source OpenSceneGraph and osgEarth libraries. These libraries offer high-level functionality, such as the ability to use 3D model files directly, to access and render online map and satellite imagery data sources, and so on.
OpenSceneGraph (openscenegraph.org), or OSG for short, is the base library. It is best to quote their web site:
“OpenSceneGraph is an open source high performance 3D graphics toolkit, used by application developers in fields such as visual simulation, games, virtual reality, scientific visualization and modeling. Written entirely in Standard C++ and OpenGL, it runs on all Windows platforms, OS X, GNU/Linux, IRIX, Solaris, HP-UX, AIX and FreeBSD operating systems. OpenSceneGraph is now well established as the world leading scene graph technology, used widely in the vis-sim, space, scientific, oil-gas, games and virtual reality industries.”
In turn, osgEarth (osgearth.org) is a geospatial SDK and terrain engine built on top of OpenSceneGraph, not unlike Google Earth. It has many attractive features.
In OMNeT++, osgEarth can be very useful for simulations involving maps, terrain, or satellites.
For 3D visualization, OMNeT++ basically exposes the OpenSceneGraph API. One needs to assemble an OSG scene graph in the model, and give it to OMNeT++ for display. The scene graph can be updated at runtime, and changes will be reflected in the display.
When a scene graph has been built by the simulation model, it needs to be given to a cOsgCanvas object to let the OMNeT++ GUI know about it. cOsgCanvas wraps a scene graph, plus hints for the GUI on how to best display the scene, for example the default camera position. In the GUI, the user can use the mouse to manipulate the camera to view the scene from various angles and distances, look at various parts of the scene, and so on.
It is important to note that the simulation model may only manipulate the scene graph, but it cannot directly access the viewer in the GUI. There is a very specific technical reason for that. The viewer may not even exist or may be displaying a different scene graph at the time the model tries to access it. The model may even be running under a non-GUI user interface (e.g. Cmdenv) where a viewer is not even part of the program. The viewer may only be influenced in the form of viewer hints in cOsgCanvas.
Every module has a built-in (default) cOsgCanvas, which can be accessed with the getOsgCanvas() method of cModule. For example, a toplevel submodule can get hold of the network's OSG canvas with the following line:
cOsgCanvas *osgCanvas = getParentModule()->getOsgCanvas();
Additional cOsgCanvas instances may be created simply with new:
cOsgCanvas *osgCanvas = new cOsgCanvas("scene2");
Once a scene graph has been assembled, it can be set on cOsgCanvas with the setScene() method.
osg::Node *scene = ...
osgCanvas->setScene(scene);
Subsequent changes in the scene graph will be automatically reflected in the visualization, there is no need to call setScene() again or otherwise let OMNeT++ know about the changes.
There are several hints that the 3D viewer may take into account when displaying the scene graph. Note that hints are only hints: the viewer may choose to ignore them, and the user may also be able to override them interactively (using the mouse, via the context menu, hotkeys, or by other means).
An example code fragment that sets some viewer hints:
osgCanvas->setViewerStyle(cOsgCanvas::STYLE_GENERIC);
osgCanvas->setCameraManipulatorType(cOsgCanvas::CAM_OVERVIEW);
osgCanvas->setClearColor(cOsgCanvas::Color("skyblue"));
osgCanvas->setGenericViewpoint(cOsgCanvas::Viewpoint(
    cOsgCanvas::Vec3d(20, -30, 30),   // observer
    cOsgCanvas::Vec3d(30, 20, 0),     // focal point
    cOsgCanvas::Vec3d(0, 0, 1)));     // up
If a 3D object in the scene represents a C++ object in the simulation, it would often be very convenient to be able to select that object for inspection by clicking it with the mouse.
OMNeT++ provides a wrapper node that associates its children with a particular OMNeT++ object (cObject descendant), making them selectable in the 3D viewer. The wrapper class is called cObjectOsgNode, and it subclasses from osg::Group.
auto objectNode = new cObjectOsgNode(myModule);
objectNode->addChild(myNode);
3D visualizations often need to load external resources from disk, for example images or 3D models. By default, OSG tries to load these files from the current working directory (unless they are given with absolute path). However, loading from the folder of the current OMNeT++ module, from the folder of the ini file, or from the image path would often be more convenient. OMNeT++ contains a function for that purpose.
The resolveResourcePath() method of modules and channels accepts a file name (or relative path) as input, and looks into a number of convenient locations to find the file. The list of the search folders includes the current working directory, the folder of the main ini file, and the folder of the NED file that defined the module or channel. If the resource is found, the function returns the full path; otherwise it returns the empty string.
The function also looks into folders on the NED path and the image path, i.e. the roots of the NED and image folder trees. These search locations allow one to load files by full NED package name (but using slashes instead of dots), or access an icon with its full name (e.g. block/sink).
An example that attempts to load a car.osgb model file:
std::string fileLoc = resolveResourcePath("car.osgb");
if (fileLoc == "")
    throw cRuntimeError("car.osgb not found");
auto node = osgDB::readNodeFile(fileLoc);  // use the resolved path
OSG and osgEarth are optional in OMNeT++, and may not be available in all installations. However, one probably wants simulation models to compile even if the particular OMNeT++ installation doesn't contain the OSG and osgEarth libraries. This can be achieved by conditional compilation.
OMNeT++ detects the OSG and osgEarth libraries and defines the WITH_OSG macro if they are present. OSG-specific code needs to be surrounded with #ifdef WITH_OSG.
An example:
...
#ifdef WITH_OSG
#include <osgDB/ReadFile>
#endif

void DemoModule::initialize()
{
#ifdef WITH_OSG
    cOsgCanvas *osgCanvas = getParentModule()->getOsgCanvas();
    osg::Node *scene = ...  // assemble scene graph here
    osgCanvas->setScene(scene);
    osgCanvas->setClearColor(cOsgCanvas::Color(0,0,64));  // hint
#endif
}
OSG and osgEarth are comprised of several libraries. By default, OMNeT++ links simulations with only a subset of them: osg, osgGA, osgViewer, osgQt, osgEarth, osgEarthUtil. When additional OSG and osgEarth libraries are needed, one needs to ensure that those libraries are linked to the model as well. The best way to achieve that is to use the following code fragment in the makefrag file of the project:
ifneq ($(OSG_LIBS),)
LIBS += $(OSG_LIBS) -losgDB -losgAnimation ...  # additional OSG libs
endif
ifneq ($(OSGEARTH_LIBS),)
LIBS += $(OSGEARTH_LIBS) -losgEarthFeatures -losgEarthSymbology ...
endif
The ifneq() statements ensure that LIBS is only updated if OMNeT++ has detected the presence of OSG/osgEarth in the first place.
OpenSceneGraph is a sizable library with 16+ namespaces and 40+ osg::Node subclasses, and we cannot fully document it here due to space constraints. Instead, in the next sections we have collected some practical advice and useful code snippets to help the reader get started. More information can be found on the openscenegraph.org web site, in dedicated OpenSceneGraph books (some of which are freely available), and in other online resources. We list some OSG-related resources at the end of this chapter.
To display a 3D model in the canvas of a compound module, an osg::Node has to be provided as the root of the scene.
One way of obtaining such a node is to load it from a file containing the model. This can be done with the osgDB::readNodeFile() method (or one of its variants). This method takes a string as argument, and based on the protocol specification and extension(s), finds a suitable loader, loads the model, and finally returns a pointer to the newly created osg::Node instance.
This node can now be set on the canvas for display with the setScene() method, as seen in the osg-intro sample (among others):
osg::Node *model = osgDB::readNodeFile("model.osgb");
getParentModule()->getOsgCanvas()->setScene(model);
OSG also has support for so-called "pseudo loaders", which provide additional options for loading models. They allow some basic operations to be performed on the model after it is loaded. To use them, append the parameters of the modifier, followed by its name, to the end of the file name when loading the model.
Take this line from the osg-earth sample for example:
*.cow[*].modelURL = "cow.osgb.2.scale.0,0,90.rot.0,0,-15e-1.trans"
This string will first scale the original cow model in cow.osgb to 200%, then rotate it 90 degrees around the Z axis and finally translate it 1.5 units downwards. The floating point numbers have to be represented in scientific notation to avoid the usage of decimal points or commas in the number as those are already used as operator and parameter separators.
Note that these modifiers operate directly on the model data and are independent of any further dynamic transformations applied to the node when it is placed in the scene. For further information refer to the OSG knowledge base.
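The same kind of pseudo-loader suffix can presumably also be used when loading a model from C++ code, for example:
// Load the cow model scaled to 200% (pseudo-loader syntax as described above).
osg::Node *model = osgDB::readNodeFile("cow.osgb.2.scale");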
Shapes can also be built programmatically. For that, one needs to use the osg::Geode, osg::ShapeDrawable and osg::Shape classes.
To create a shape, one first needs to create an osg::Shape. osg::Shape is an abstract class and it has several subclasses, like osg::Box, osg::Sphere, osg::Cone, osg::Cylinder or osg::Capsule. That object is only an abstract definition of the shape, and cannot be drawn on its own. To make it drawable, one needs to create an osg::ShapeDrawable for it. However, an osg::ShapeDrawable still cannot be attached to the scene, as it is still not an osg::Node. The osg::ShapeDrawable must be added to an osg::Geode (geometry node) to be able to insert it into the scene. This object can then be added to the scene and positioned and oriented freely, just like any other osg::Node.
For an example of this see the following snippet from the osg-satellites sample. This code creates an osg::Cone and adds it to the scene.
auto cone = new osg::Cone(osg::Vec3(0, 0, -coneRadius*0.75), coneHeight, coneRadius);
auto coneDrawable = new osg::ShapeDrawable(cone);
auto coneGeode = new osg::Geode;
coneGeode->addDrawable(coneDrawable);
locatorNode->addChild(coneGeode);
Note that a single osg::Shape instance can be used to construct many osg::ShapeDrawables, and a single osg::ShapeDrawable can be added to any number of osg::Geodes to make it appear in multiple places or sizes in the scene. This can in fact improve rendering performance.
One way to position and orient nodes is by making them children of an osg::PositionAttitudeTransform. This node provides methods to set the position, orientation and scale of its children. Orientation is done with quaternions (osg::Quat). Quaternions can be constructed from an axis of rotation and a rotation angle around the axis.
The following example places a node at the (x, y, z) coordinates and rotates it around the Z axis by heading radians to make it point in the right direction.
osg::Node *objectNode = ...;
auto transformNode = new osg::PositionAttitudeTransform();
transformNode->addChild(objectNode);
transformNode->setPosition(osg::Vec3d(x, y, z));
double heading = ...;  // (in radians)
transformNode->setAttitude(osg::Quat(heading, osg::Vec3d(0, 0, 1)));
OSG makes it possible to display text or image labels in the scene. Labels are rotated to be always parallel to the screen, and scaled to appear in a constant size. In the following we'll show an example where we create a label and display it relative to an arbitrary node.
First, the label has to be created:
auto label = new osgText::Text();
label->setCharacterSize(18);
label->setBoundingBoxColor(osg::Vec4(1.0, 1.0, 1.0, 0.5));  // RGBA
label->setColor(osg::Vec4(0.0, 0.0, 0.0, 1.0));  // RGBA
label->setAlignment(osgText::Text::CENTER_BOTTOM);
label->setText("Hello World");
label->setDrawMode(osgText::Text::FILLEDBOUNDINGBOX | osgText::Text::TEXT);
Or if desired, a textured rectangle with an image:
auto image = osgDB::readImageFile("myicon.png");
auto texture = new osg::Texture2D();
texture->setImage(image);
auto icon = osg::createTexturedQuadGeometry(osg::Vec3(0.0, 0.0, 0.0),
    osg::Vec3(image->s(), 0.0, 0.0), osg::Vec3(0.0, image->t(), 0.0),
    0.0, 0.0, 1.0, 1.0);
icon->getOrCreateStateSet()->setTextureAttributeAndModes(0, texture);
icon->getOrCreateStateSet()->setMode(GL_DEPTH_TEST, osg::StateAttribute::ON);
If the image has transparent parts, one also needs the following lines:
icon->getOrCreateStateSet()->setMode(GL_BLEND, osg::StateAttribute::ON);
icon->getOrCreateStateSet()->setRenderingHint(osg::StateSet::TRANSPARENT_BIN);
The icon and/or label needs an osg::Geode to be placed in the scene. Lighting is best disabled for the label.
auto geode = new osg::Geode();
geode->getOrCreateStateSet()->setMode(GL_LIGHTING,
    osg::StateAttribute::OFF | osg::StateAttribute::OVERRIDE);
double labelSpacing = 2;
label->setPosition(osg::Vec3(0.0, labelSpacing, 0.0));
geode->addDrawable(label);
geode->addDrawable(icon);
This osg::Geode should be made a child of an osg::AutoTransform node, which applies the correct transformations to it for the label-like behaviour to happen:
auto autoTransform = new osg::AutoTransform();
autoTransform->setAutoScaleToScreen(true);
autoTransform->setAutoRotateMode(osg::AutoTransform::ROTATE_TO_SCREEN);
autoTransform->addChild(geode);
We want the label to appear relative to an object in the scene, represented by a node called modelNode. One way would be to make autoTransform a child of modelNode so that it moves with it; here, we instead place both of them under a common osg::Group, as siblings, so they can be handled together as a single unit. The group is then inserted into the scene:
auto modelNode = ...;
auto group = new osg::Group();
group->addChild(modelNode);
group->addChild(autoTransform);
To place the label above the object, we set its position to (0,0,z), where z is the radius of the object's bounding sphere.
auto boundingSphere = modelNode->getBound();
autoTransform->setPosition(osg::Vec3d(0.0, 0.0, boundingSphere.radius()));
To draw a line between two points in the scene, first the two points have to be added into an osg::Vec3Array. Then an osg::DrawArrays should be created to specify which part of the array needs to be drawn. In this case, it is obviously two points, starting from the one at index 0. Finally, an osg::Geometry is necessary to join the two together.
auto vertices = new osg::Vec3Array();
vertices->push_back(osg::Vec3(begin_x, begin_y, begin_z));
vertices->push_back(osg::Vec3(end_x, end_y, end_z));

auto drawArrays = new osg::DrawArrays(osg::PrimitiveSet::LINE_STRIP);
drawArrays->setFirst(0);
drawArrays->setCount(vertices->size());

auto geometry = new osg::Geometry();
geometry->setVertexArray(vertices);
geometry->addPrimitiveSet(drawArrays);
The resulting osg::Geometry must be added to an osg::Geode (geometry node), which makes it possible to add it to the scene.
auto geode = new osg::Geode();
geode->addDrawable(geometry);
To change some visual properties of the line, the osg::StateSet of the osg::Geode has to be modified. The width of the line, for example, is controlled by an osg::StateAttribute called osg::LineWidth.
float width = ...;
auto stateSet = geode->getOrCreateStateSet();
auto lineWidth = new osg::LineWidth();
lineWidth->setWidth(width);
stateSet->setAttributeAndModes(lineWidth, osg::StateAttribute::ON);
Because of how osg::Geometry is rendered, the specified line width will always be constant on the screen (measured in pixels), and will not vary based on the distance from the camera. To achieve that effect, a long and thin osg::Cylinder could be used instead.
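A possible sketch of that alternative, reusing the endpoint coordinates from the snippet above; the radius value and variable names are assumptions for illustration.
// Draw a "line" of constant world-space thickness as a thin cylinder
// between the two endpoints.
float radius = ...;  // in scene units
osg::Vec3 begin(begin_x, begin_y, begin_z), end(end_x, end_y, end_z);
osg::Vec3 center = (begin + end) * 0.5f;
float length = (end - begin).length();
auto cylinder = new osg::Cylinder(center, radius, length);

// osg::Cylinder is aligned with the Z axis by default; rotate it so that it
// points from 'begin' towards 'end'.
osg::Quat rotation;
rotation.makeRotate(osg::Vec3(0, 0, 1), end - begin);
cylinder->setRotation(rotation);

auto cylinderDrawable = new osg::ShapeDrawable(cylinder);
auto cylinderGeode = new osg::Geode();
cylinderGeode->addDrawable(cylinderDrawable);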
Changing the color of the line can be achieved by setting an appropriate osg::Material on the osg::StateSet. It is recommended to disable lighting for the line; otherwise it might appear in a different color, depending on where it is viewed from or what was rendered just before it.
auto material = new osg::Material();
osg::Vec4 colorVec(red, green, blue, opacity);  // all between 0.0 and 1.0
material->setAmbient(osg::Material::FRONT_AND_BACK, colorVec);
material->setDiffuse(osg::Material::FRONT_AND_BACK, colorVec);
material->setAlpha(osg::Material::FRONT_AND_BACK, opacity);
stateSet->setAttribute(material);
stateSet->setMode(GL_LIGHTING,
    osg::StateAttribute::OFF | osg::StateAttribute::OVERRIDE);
Independent of how the scene has been constructed, it is always important to keep track of how the individual nodes are related to each other in the scene graph, because every modification of an osg::Node is by default propagated to all of its children, be it a transformation, a render state variable, or some other flag.
For really simple scenes it might be enough to have an osg::Group as the root node, and make every other object a direct child of that. This reduces the complications and avoids any strange surprises regarding state inheritance. For more complex scenes it is advisable to follow the logical hierarchy of the displayed objects in the scene graph.
Once the desired object has been created and added to the scene, it can be easily moved and oriented to represent the state of the simulation by making it a child of an osg::PositionAttitudeTransform node.
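A typical pattern is to store a pointer to that transform node as a module member and update it from the module's refreshDisplay() method; a rough sketch with illustrative names follows.
// Inside a simple module that owns the transform node; the MobileNode class,
// the transformNode member and the x, y, z, heading values are illustrative.
void MobileNode::refreshDisplay() const
{
    // reposition and re-orient the node's 3D representation according to the
    // current state of the simulation model
    transformNode->setPosition(osg::Vec3d(x, y, z));
    transformNode->setAttitude(osg::Quat(heading, osg::Vec3d(0, 0, 1)));
}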
In simple cases, when there is only a single animation and it is set up to play in a loop automatically (like the walking man in the osg-indoor sample simulation), there is no need to explicitly control it (provided that is the desired behaviour).
Otherwise, the individual actions can be controlled by an osgAnimation::AnimationManager, with methods like playAnimation(), stopAnimation(), isPlaying(), etc. Animation managers can be found among the descendants of the loaded osg::Nodes which are animated, for example using a custom osg::NodeVisitor:
osg::Node *objectNode = osgDB::readNodeFile( ... );

struct AnimationManagerFinder : public osg::NodeVisitor
{
    osgAnimation::BasicAnimationManager *result = nullptr;

    AnimationManagerFinder() : osg::NodeVisitor(osg::NodeVisitor::TRAVERSE_ALL_CHILDREN) {}

    void apply(osg::Node& node) {
        if (result)
            return;  // already found it
        if (osgAnimation::AnimationManagerBase *b =
                dynamic_cast<osgAnimation::AnimationManagerBase*>(node.getUpdateCallback())) {
            result = new osgAnimation::BasicAnimationManager(*b);
            return;
        }
        traverse(node);
    }
} finder;

objectNode->accept(finder);
animationManager = finder.result;
This visitor simply finds the first node in the subtree which has an update callback of type osgAnimation::AnimationManagerBase. Its result is a new osgAnimation::BasicAnimationManager created from the base.
This new animationManager now has to be set as an update callback on the objectNode to be able to actually drive the animations. Then any animation in the list returned by getAnimationList() can be set up as needed and played.
objectNode->setUpdateCallback(animationManager);
auto animation = animationManager->getAnimationList().front();
animation->setPlayMode(osgAnimation::Animation::STAY);
animation->setDuration(2);
animationManager->playAnimation(animation);
Every osg::Drawable can have an osg::StateSet attached to it. An easy way of accessing it is via the getOrCreateStateSet() method of the drawable node. An osg::StateSet encapsulates a subset of the OpenGL state, and can be used to modify various rendering parameters, for example the used textures, shader programs and their parameters, color and material, face culling, depth and stencil options, and many more osg::StateAttributes.
The following example enables blending for a node and sets up a transparent, colored material to be used for rendering it, through its osg::StateSet.
auto stateSet = node->getOrCreateStateSet();
stateSet->setMode(GL_BLEND, osg::StateAttribute::ON);

auto matColor = osg::Vec4(red, green, blue, alpha);  // all between 0.0 and 1.0
auto material = new osg::Material;
material->setEmission(osg::Material::FRONT, matColor);
material->setDiffuse(osg::Material::FRONT, matColor);
material->setAmbient(osg::Material::FRONT, matColor);
material->setAlpha(osg::Material::FRONT, alpha);

stateSet->setAttributeAndModes(material, osg::StateAttribute::OVERRIDE);
To help OSG with the correct rendering of objects with transparency, they should be placed in the TRANSPARENT_BIN by setting up a rendering hint on their osg::StateSet. This ensures that they will be drawn after all fully opaque objects, and in decreasing order of their distance from the camera. When there are multiple transparent objects intersecting each other in the scene (like the transmission “bubbles” in the BostonPark configuration of the osg-earth sample simulation), there is no order in which they would appear correctly. A solution for these cases is to disable writing to the depth buffer during their rendering using the osg::Depth attribute.
stateSet->setRenderingHint(osg::StateSet::TRANSPARENT_BIN);
osg::Depth *depth = new osg::Depth;
depth->setWriteMask(false);
stateSet->setAttributeAndModes(depth, osg::StateAttribute::ON);
Please note that this still does not guarantee a completely physically accurate look, since that is a much harder problem to solve, but it at least minimizes the obvious visual artifacts. Also, too many transparent objects may decrease performance, so they should not be overused.
osgEarth is a cross-platform terrain and mapping SDK built on top of OpenSceneGraph. The most visible feature of osgEarth is that it adds support for loading .earth files to osgDB::readNodeFile(). An .earth file specifies contents and appearance of the displayed globe. This can be as simple as a single image textured over a sphere or as complex as realistic terrain data and satellite images complete with street and building information dynamically streamed over the internet from a publicly available provider, thanks to the flexibility of osgEarth. osgEarth also defines additional APIs to help with coordinate conversions and other tasks. Other than that, one's OSG knowledge is also applicable when building osgEarth scenes.
The next sections contain some tips and code fragments to help the reader get started with osgEarth. As with OSG, there are numerous other sources of information, both printed and online, when the info contained herein is insufficient.
When the osgEarth plugin is used to display a map as the visual environment of the simulation, its appearance can be described in a .earth file.
It can be loaded using the osgDB::readNodeFile() method, just like any other regular model. The resulting osg::Node will contain a node of type osgEarth::MapNode, which can easily be found using the osgEarth::MapNode::findMapNode() function. This node serves as the data model containing all the data specified in the .earth file.
auto earth = osgDB::readNodeFile("example.earth");
auto mapNode = osgEarth::MapNode::findMapNode(earth);
An .earth file can specify a wide variety of options. The type attribute of the map tag (which is always the root of the document) lets the user select whether the terrain should be projected onto a flat plane (projected), or rendered as a geoid (geocentric).
Where the texture of the terrain is acquired from is specified by image tags. Many different kinds of sources are supported, including local files and popular online map sources with open access like MapQuest or OpenStreetMap. These can display different kinds of graphics, like satellite imagery, street or terrain maps, or other features the given on-line service provides.
The following example .earth file will set up a spherical rendering of Earth with textures from openstreetmap.org:
<map name="OpenStreetMap" type="geocentric" version="2" > <image name="osm_mapnik" driver="xyz" > <url>http://[abc].tile.openstreetmap.org/z/x/y.png</url> </image> </map>
Elevation data can also be acquired in a similarly simple fashion using the elevation tag. The next snippet demonstrates this:
<map name="readymap.org" type="geocentric" version="2" > <image name="readymap_imagery" driver="tms" > <url>http://readymap.org/readymap/tiles/1.0.0/7/</url> </image> <elevation name="readymap_elevation" driver="tms" > <url>http://readymap.org/readymap/tiles/1.0.0/9/</url> </elevation> </map>
For a detailed description of the available image and elevation source drivers, refer to the online references of osgEarth, or use one of the sample .earth files shipped with it.
The following partial .earth file places a label over Los Angeles, an extruded ellipse (a hollow cylinder) next to it, and a big red flag nearby.
<map ...>
  ...
  <external>
    <annotations>
      <label text="Los Angeles">
        <position lat="34.051" long="-117.974" alt="100" mode="relative"/>
      </label>
      <ellipse name="ellipse extruded">
        <position lat="32.73" long="-119.0"/>
        <radius_major value="50" units="km"/>
        <radius_minor value="20" units="km"/>
        <style type="text/css">
          fill: #ff7f007f;
          stroke: #ff0000ff;
          extrusion-height: 5000;
        </style>
      </ellipse>
      <model name="flag model">
        <url>flag.osg.18000.scale</url>
        <position lat="33" long="-117.75" hat="0"/>
      </model>
    </annotations>
  </external>
</map>
Being able to use online map providers is very convenient, but it is often more desirable to use an offline map resource. Doing so not only makes the simulation usable without internet access, but also speeds up map loading and insulates the simulation against changes in the online environment (availability, content and configuration changes of map servers).
There are two ways map data may come from the local disk: caching, and using a self-contained offline map package. In this section we'll cover the latter, and show how you can create an offline map package from online sources, using the command line tool called osgearth_package. The resulting package, unlike map cache, will also be redistributable.
Given the right arguments, osgearth_package will download the tiles that make up the map, and arrange them in a fairly standardized, self-contained package. It will also create a corresponding .earth file that can be later used just like any other.
For example, the osg-earth sample simulation uses a tile package which has been created with a command similar to this one:
$ osgearth_package --tms boston.earth --out offline-tiles \
      --bounds -71.0705566406 42.350425122434 -71.05957031 42.358543917497 \
      --max-level 18 --out-earth boston_offline.earth --mt --concurrency 8
The --tms boston.earth arguments mean that we want to create a package in TMS format from the input file boston.earth. The --out offline-tiles argument specifies the output directory.
The --bounds argument specifies the rectangle of the map to be included in the package, given as xmin ymin xmax ymax in the standard WGS84 datum (longitude/latitude). These example coordinates cover the Boston Common area, which is used in some samples. The size of this rectangle obviously has a big impact on the size of the resulting package.
The --max-level 18 argument is the maximum level of detail to be saved. This is a simple way of adjusting the tradeoff between quality and required disk space. Values between 15 and 20 are generally suitable, depending on the size of the target area and the available storage capacity.
The --out-earth boston_offline.earth option tells the utility to generate an .earth file with the given name in the output directory that references the prepared tile package as image source.
The --mt --concurrency 8 arguments will make the process run in multithreaded mode, using 8 threads, potentially speeding it up.
The tool has a few more options for controlling the image format and compression mode among others. Consult the documentation for details, or the short usage help accessible with the -h switch.
To easily position a part of the scene together on a given geographical location, an osgEarth::GeoTransform is of great help. It takes geographical coordinates (longitude/latitude/altitude), and creates a simple Cartesian coordinate system centered on the given location, in which all of its children can be positioned painlessly, without having to worry about further coordinate transformations between Cartesian and geographic systems. To move and orient the children within this local system, osg::PositionAttitudeTransform can be used.
osgEarth::GeoTransform *geoTransform = new osgEarth::GeoTransform();
osg::PositionAttitudeTransform *localTransform = new osg::PositionAttitudeTransform();

mapNode->getModelLayerGroup()->addChild(geoTransform);
geoTransform->addChild(localTransform);
localTransform->addChild(objectNode);

geoTransform->setPosition(osgEarth::GeoPoint(mapNode->getMapSRS(), longitude, latitude, altitude));
localTransform->setAttitude(osg::Quat(heading, osg::Vec3d(0, 0, 1)));
To display additional information on top of the terrain, annotations can be used. These are special objects that can adapt to the shape of the surface. Annotations can be of many kinds, for example simple geometric shapes like circles, ellipses, rectangles, lines, polygons (which can be extruded upwards to make solids); texts or labels, arbitrary 3D models, or images projected onto the surface.
All the annotations that can be created declaratively from an .earth file can also be generated programmatically at runtime.
This example shows how the circular transmission ranges of the cows in the osg-earth sample are created, in the form of an osgEarth::Annotation::CircleNode annotation. Some basic styling is applied to it using an osgEarth::Style, and the rendering technique to be used is specified.
auto scene = ...;
auto mapNode = osgEarth::MapNode::findMapNode(scene);
auto geoSRS = mapNode->getMapSRS()->getGeographicSRS();

osgEarth::Style rangeStyle;
rangeStyle.getOrCreate<PolygonSymbol>()->fill()->color() = osgEarth::Color(rangeColor);
rangeStyle.getOrCreate<AltitudeSymbol>()->clamping() = AltitudeSymbol::CLAMP_TO_TERRAIN;
rangeStyle.getOrCreate<AltitudeSymbol>()->technique() = AltitudeSymbol::TECHNIQUE_DRAPE;

rangeNode = new osgEarth::Annotation::CircleNode(mapNode.get(),
    osgEarth::GeoPoint(geoSRS, longitude, latitude),
    osgEarth::Linear(radius, osgEarth::Units::METERS), rangeStyle);
mapNode->getModelLayerGroup()->addChild(rangeNode);
Loading and manipulating OSG models:
Creating 3D models for OpenSceneGraph using Blender:
osgEarth online documentation:
Be sure to check the samples coming with the OpenSceneGraph installation, as they contain invaluable information.
The following books can be useful for more complex visualization tasks:
This book is a concise introduction to the OpenSceneGraph API. It can be purchased from http://www.osgbooks.com, and it is also available as a free PDF download.
This book is a concise introduction to the main features of OpenSceneGraph which then leads the reader into the fundamentals of developing virtual reality applications. Practical instructions and explanations accompany every step.
This book contains 100 recipes in 9 chapters, focusing on different fields including the installation, nodes, geometries, camera manipulation, animations, effects, terrain building, data management, GUI integration.
This chapter describes the process and tools for building executable simulation models from their source code.
As described in the previous chapters, the source of an OMNeT++ model usually contains the following files:
The process of turning the source into an executable form is, in a nutshell, this:
Note that apart from the first step, the process is the same as building any C/C++ program. Also note that NED and ini files do not play a part in this process, as they are loaded by the simulation program at runtime.
One needs to link with the following libraries:
The exact file names of the libraries depend on the platform and a number of additional factors.
The figure below shows an overview of the process of building (and running) simulation programs.
You can see that the build process is not complicated. Tools such as make and opp_makemake, to be described in the rest of the chapter, are primarily needed to optimize rebuilds (if a message file has been translated already, there is no need to repeat the translation for every build unless the file has changed), and for automation.
There are several tools available for managing the build of C/C++ programs. OMNeT++ uses the traditional way, Makefiles. Writing a Makefile is usually a tedious task. However, OMNeT++ provides a tool that can generate the Makefile for the user, saving manual labour.
opp_makemake can automatically generate a Makefile for simulation programs, based on the source files in the current directory and (optionally) in subdirectories.
The most important options accepted by opp_makemake are:
There are several other options; run opp_makemake -h to see the complete list.
Assuming the source files (*.ned, *.msg, *.cc, *.h) are located in a single directory, one can change to that directory and type:
$ opp_makemake
This will create a file named Makefile. Now, running the make program will build a simulation executable.
$ make
To regenerate an existing Makefile, add the -f option to the command line; otherwise opp_makemake will refuse to overwrite it.
$ opp_makemake -f
The name of the output file will be derived from the name of the project directory (see later). It can be overridden with the -o option:
$ opp_makemake -f -o aloha
The generated Makefile supports the following targets:
opp_makemake generates a Makefile that can create both release and debug builds. By default it creates the release version, but it is easy to override this behavior by defining the MODE variable on the make command line.
$ make MODE=debug
It is also possible to generate a Makefile that defaults to debug builds. This can be achieved by adding the --mode option to the opp_makemake command line.
$ opp_makemake --mode debug
opp_makemake generates a Makefile that prints only minimal information during the build process (only the name of the file being compiled). To see the full compiler commands executed by the Makefile, add the V=1 parameter to the make command line.
$ make V=1
If the simulation model relies on an external library, the following opp_makemake options can be used to make the simulation link with the library.
For example, linking with a hypothetical Foo library installed under /opt/foo might require the following additional opp_makemake options: -I/opt/foo/include -L/opt/foo/lib -lfoo.
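Put together with the options introduced earlier, the Makefile generation command might then look like this (the paths and the library name are only illustrative):
$ opp_makemake -f --deep -I/opt/foo/include -L/opt/foo/lib -lfoo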
It is possible to build a whole source directory tree with a single Makefile. A source tree will generate a single output file (executable or library). A source directory tree will always have a Makefile in its root, and source files may be placed anywhere in the tree.
To turn on this option, use the opp_makemake --deep option. opp_makemake will collect all .cc and .msg files from the whole subdirectory tree, and generate a Makefile that covers all. To exclude a specific directory, use the -X exclude/dir/path option. (Multiple -X options are accepted.)
An example:
$ opp_makemake -f --deep -X experimental -X obsolete
In the C++ code, include statements should contain the location of the file relative to the Makefile's location.
#include "utils/common/Foo.h"
The make program can utilize dependency information in the Makefile to shorten build times by omitting build steps whose input has not changed since the last build. Dependency information is automatically created and kept up-to-date during the build process.
Dependency information is kept in .d files in the output directory.
The build system creates object and executable files in a separate directory, called the output directory. By default, the output directory is out/<configname>, where the <configname> part depends on the compiler toolchain and build mode settings. (For example, the result of a debug build with GCC will be placed in out/gcc-debug.) The subdirectory tree inside the output directory will mirror the source directory structure.
By default, the out directory is placed in the project root directory. This location can be changed with opp_makemake's -O option.
$ opp_makemake -O ../tmp/obj
By default the Makefile will create an executable file, but it is also possible to build shared or static libraries. Shared libraries are usually a better choice.
Use --make-so to create shared libraries, and --make-lib to build static libraries. The --nolink option completely omits the linking step, which is useful for top-level Makefiles that only invoke other Makefiles, or when custom linking commands are needed.
The --recurse option enables recursive make; when you build the simulation, make descends into the subdirectories and runs make in them too. By default, --recurse descends into all subdirectories; the -X <dir> option can be used to make it ignore certain subdirectories. This option is especially useful for top-level Makefiles.
The --recurse option automatically discovers subdirectories, but this is sometimes inconvenient. Your source directory tree may contain parts which need their own hand-written Makefile, for example because you include source files from another, non-OMNeT++ project. With the -d <dir> or --subdir <dir> option, you can explicitly specify which directories to recurse into; moreover, the directories need not be direct children of the current directory.
The recursive make options (--recurse, -d, --subdir) imply -X, that is, the directories recursed into will be automatically excluded from deep Makefiles.
You can control the order of traversal by adding dependencies into the makefrag file (see [9.2.11]).
Motivation for recursive builds:
It is possible to add rules or otherwise customize the generated Makefile by providing a makefrag file. When you run opp_makemake, it will automatically insert the content of the makefrag file into the resulting Makefile. With the -i option, you can also name other files to be included into the Makefile.
makefrag will be inserted after the definitions but before the first rule, so it is possible to override existing definitions and add new ones, and also to override the default target.
makefrag can be useful if some of your source files are generated from other files (for example, you use generated NED files), if you need additional targets in your Makefile, or if you simply want to override the default target in the Makefile.
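For instance, a makefrag might add a rule for a generated source file. The following is only a sketch with made-up file names and a made-up generator command (note that Makefile recipe lines must be indented with a tab):
# makefrag (fragment merged into the generated Makefile by opp_makemake)
# hypothetical rule: Tables.cc is produced from Tables.txt by "gentables"
Tables.cc: Tables.txt
	gentables -o Tables.cc Tables.txt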
In the case of a large project, your source files may be spread across several directories, and the project may produce more than one output file (e.g. several shared libraries, example programs, etc.).
Once you have created the Makefiles with opp_makemake in every source directory tree, you will need a top-level Makefile. The top-level Makefile usually does little more than invoke the Makefiles in the source directory trees, recursively.
For a complex example of using opp_makemake, we will show how to create the Makefiles for a large project. First, take a look at the project's directory structure and find the directories that should be used as source trees:
project/
    doc/
    images/
    simulations/
    contrib/      <-- source tree (build libmfcontrib.so from this dir)
    core/         <-- source tree (build libmfcore.so from this dir)
    test/         <-- source tree (build testSuite executable from this dir)
Additionally, there are dependencies between these output files: mfcontrib requires mfcore and testSuite requires mfcontrib (and indirectly mfcore).
First, we create the Makefile for the core directory. The Makefile will build a shared lib from all .cc files in the core subtree, and will name it mfcore:
$ cd core && opp_makemake -f --deep --make-so -o mfcore -O out
The contrib directory depends on mfcore, so we use the -L and -l options to specify the library we should link with.
$ cd contrib && opp_makemake -f --deep --make-so -o mfcontrib -O out \
    -I../core -L../out/\$\(CONFIGNAME\)/core -lmfcore
The testSuite will be created as an executable file which depends on both mfcontrib and mfcore.
$ cd test && opp_makemake -f --deep -o testSuite -O out \
    -I../core -I../contrib -L../out/\$\(CONFIGNAME\)/contrib -lmfcontrib
Now, let us specify the dependencies among the above directories. Add the lines below to the makefrag file in the project root directory.
contrib_dir: core_dir
test_dir: contrib_dir
Now the last step is to create a top-level Makefile in the root of the project that calls the previously created Makefiles in the correct order. We will use the --nolink option, exclude every subdirectory from the build (-X.), and explicitly call the above Makefiles using -d <dir>. opp_makemake will automatically include the makefrag file created above.
$ opp_makemake -f --nolink -O out -d test -d core -d contrib -X.
Long compile times are often an inconvenience when working with large OMNeT++-based model frameworks. OMNeT++ has a facility named project features that lets you reduce build times by excluding or disabling parts of a large model library. For example, you can disable modules that you do not use for the current simulation study. The word feature refers to a piece of the project codebase that can be turned off as a whole.
Additional benefits of project features include enforcing cleaner separation of unrelated parts in the model framework, being able to exclude code written for other platforms, and a less cluttered model palette in the NED editor.
Project features can be enabled/disabled from both the IDE and the command line. It is possible to query the list of enabled project features, and use this information in creating a Makefile for the project.
Features can be defined per project. As already mentioned, a feature is a piece of the project codebase that can be turned off as a whole, that is, excluded from the C++ sources (and thus from the build) and also from NED. Feature definitions are typically written and distributed by the author of the project; end users are only presented with the option of enabling/disabling those features. A feature definition contains:
Project features can be queried and manipulated using the opp_featuretool program. The first argument to the program must be a command; the most frequently used ones are list, enable and disable. The operation of commands can be refined with further options. One can obtain the full list of commands and options using the -h option.
Here are some examples of using the program.
Listing all features in the project:
$ opp_featuretool list
Listing all enabled features in the project:
$ opp_featuretool list -e
Enabling all features:
$ opp_featuretool enable all
Disabling a specific feature:
$ opp_featuretool disable Foo
The following command prints the command line options that should be used with opp_makemake to create a Makefile that builds the project with the currently enabled features:
$ opp_featuretool options
The easiest way to pass the output of the above command to opp_makemake is the $(...) shell construct:
$ opp_makemake --deep $(opp_featuretool options)
Often it is convenient to put feature defines (e.g. WITH_FOO) into a header file instead of passing them to the compiler via -D options. This makes it easier to detect feature enablements from derived projects, and also makes it easier for C++ code editors to correctly highlight conditional code blocks that depend on project features.
The header file can be generated with opp_featuretool using the following command:
$ opp_featuretool defines >feature_defines.h
At the same time, -D options must be removed from the compiler command line. opp_featuretool options has switches to filter them out. The modified command for Makefile generation:
$ opp_makemake --deep $(opp_featuretool options -fl)
It is advisable to create a Makefile rule that regenerates the header file when feature enablements change:
feature_defines.h: $(wildcard .oppfeaturestate) .oppfeatures
	opp_featuretool defines >feature_defines.h
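C++ code in the project can then rely on the generated header instead of compiler -D flags. A minimal sketch, reusing the WITH_FOO define and the feature_defines.h name from the examples above:
#include "feature_defines.h"   // generated by "opp_featuretool defines"

#ifdef WITH_FOO
// this part is only compiled when the Foo feature is enabled
void setupFooProtocol();
#endif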
Project features are defined in the .oppfeatures file in your project's root directory. This is an XML file, and it has to be written by hand (there is no specialized editor for it).
The root element is <features>, and it may have several <feature> child elements, each defining a project feature. The fields of a feature are represented with XML attributes; attribute names are id, name, description, initiallyEnabled, requires, labels, nedPackages, extraSourceFolders, compileFlags and linkerFlags. Items within attributes that represent lists (requires, labels, etc.) are separated by spaces.
Here is an example feature from the INET Framework:
<feature
    id = "TCP_common"
    name = "TCP Common"
    description = "The common part of TCP implementations"
    initiallyEnabled = "true"
    requires = "IPv4"
    labels = "Transport"
    nedPackages = "inet.transport.tcp_common
                   inet.applications.tcpapp
                   inet.util.headerserializers.tcp"
    extraSourceFolders = ""
    compileFlags = "-DWITH_TCP_COMMON"
    linkerFlags = ""
/>
Project feature enablements are stored in the .oppfeaturestate file.
If you plan to introduce a project feature in your project, here's what you'll need to do:
Configuration and input data for the simulation are in a configuration file usually called omnetpp.ini.
For a start, let us see a simple omnetpp.ini file which can be used to run the Fifo example simulation.
[General]
network = FifoNet
sim-time-limit = 100h
cpu-time-limit = 300s
#debug-on-errors = true
#record-eventlog = true

[Config Fifo1]
description = "low job arrival rate"
**.gen.sendIaTime = exponential(0.2s)
**.gen.msgLength = 100b
**.fifo.bitsPerSec = 1000bps

[Config Fifo2]
description = "high job arrival rate"
**.gen.sendIaTime = exponential(0.01s)
**.gen.msgLength = 10b
**.fifo.bitsPerSec = 1000bps
The file is grouped into sections named [General], [Config Fifo1] and [Config Fifo2], each one containing several entries.
An OMNeT++ configuration file is a line-oriented text file. The encoding is primarily ASCII, but non-ASCII characters are permitted in comments and string literals. This allows for using encodings that are a superset of ASCII, for example ISO 8859-1 and UTF-8. There is no limit on the file size or on the line length.
Comments may be placed at the end of any line after a hash mark, “#”. Comments extend to the end of the line, and are ignored during processing. Blank lines are also allowed and ignored.
Long lines can be broken to multiple lines in two ways: using the traditional trailing backslash notation also found in C/C++, or alternatively, by indenting the continuation lines.
When using the former method, the rule is that if the last character of a line is “\”, it will be joined with the next line after removing the backslash and the newline. (Potential leading whitespace on the second line is preserved.) Note that this allows breaking the line even in the middle of a name, number or string constant.
When using the latter method, a line can be broken between any two tokens by inserting a newline and indenting the continuation. An indented line is interpreted as a continuation of the previous line; the first line and the indented lines that follow it are parsed as a single multi-line unit. Consequently, this method does not allow breaking a line in the middle of a word or inside a string constant.
The two ways of breaking lines can be freely combined.
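As an illustration, the following sketch (with made-up parameter names) uses both continuation styles:
[General]
**.gen.sendIaTime =
        exponential(0.2s)          # continuation by indenting the second line
**.gen.greeting = "Hello, \
World"                             # the backslash joins the two lines inside the string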
There are three types of lines: section heading lines, key-value lines, and directive lines:
Key-value lines may not occur above the first section heading line (except in included files, see later).
Keys may be further classified based on syntax alone:
An example:
# This is a comment line
[General]                            # section heading
network = Foo                        # configuration option
debug-on-errors = false              # another configuration option
**.vector-recording = false          # per-object configuration option
**.app*.typename = "HttpClient"      # per-object configuration option
**.app*.interval = 3s                # parameter value
**.app*.requestURL = "http://www.example.com/this-is-a-very-very-very-very\
-very-long-url?q=123456789"          # a two-line parameter value
OMNeT++ supports including an ini file in another, via the include keyword. This feature allows one to partition a large ini file into logical units, fixed and varying part, etc.
An example:
# omnetpp.ini
...
include params1.ini
include params2.ini
include ../common/config.ini
...
One can also include files from other directories. If the included ini file further includes others, their path names will be understood as relative to the location of the file which contains the reference, rather than relative to the current working directory of the simulation.
This rule also applies to other file names occurring in ini files (such as the load-libs, output-vector-file, output-scalar-file, etc. options, and xmldoc() module parameter values.)
In included files, it is allowed to have key-value lines without first having a section heading line. File inclusion is conceptually handled as text substitution, except that a section heading in an included file will not change the current section of the main file. The following example illustrates the rules:
# incl.ini
foo1 = 1   # no preceding section heading: these lines will go into
foo2 = 2   # whichever section the file is included into

[Config Bar]
bar = 3    # this will always go into [Config Bar]
# omnetpp.ini
[General]
include incl.ini   # adds foo1/foo2 to [General], and defines [Config Bar] w/ bar
baz1 = 4           # include files don't change the current section, so these
baz2 = 4           # lines still belong to [General]
An ini file may contain a [General] section, and several [<configname>] or [Config <configname>] sections. The use of the Config prefix is optional, i.e. [Foo] and [Config Foo] are equivalent.
The order of the sections is not significant.
The most commonly used options of the [General] section are the following.
Note that the NED files loaded by the simulation may contain several networks, and any of them may be specified in the network option.
Named configurations are in sections of the form [Config <configname>] or [<configname>] (the Config word is optional), where <configname> is by convention a camel-case string that starts with a capital letter: Config1, WirelessPing, OverloadedFifo, etc. For example, omnetpp.ini for an Aloha simulation might have the following skeleton:
[General]
...
[Config PureAloha]
...
[Config SlottedAloha1]
...
[Config SlottedAloha2]
...
Some configuration options (such as user interface selection) are only accepted in the [General] section, but most of them can go into Config sections as well.
When a simulation is run, one needs to select one of the configurations to be activated. In Cmdenv, this is done with the -c command-line option:
$ aloha -c PureAloha
The simulation will then use the contents of the [Config PureAloha] section to set up the simulation. (Qtenv, of course, lets the user choose the configuration from a dialog.)
When the PureAloha configuration is activated, the contents of the [General] section will also be taken into account: if some configuration option or parameter value is not found in [Config PureAloha], then the search will continue in the [General] section. In other words, lookups in [Config PureAloha] will fall back to [General]. The [General] section itself is optional; when it is absent, it is treated like an empty [General] section.
All named configurations fall back to [General] by default. However, for each configuration it is possible to specify the fall-back section or a list of fallback sections explicitly, using the extends key. Consider the following ini file skeleton:
[General]
...
[Config SlottedAlohaBase]
...
[Config LowTrafficSettings]
...
[Config HighTrafficSettings]
...
[Config SlottedAloha1]
extends = SlottedAlohaBase, LowTrafficSettings
...
[Config SlottedAloha2]
extends = SlottedAlohaBase, HighTrafficSettings
...
[Config SlottedAloha2a]
extends = SlottedAloha2
...
[Config SlottedAloha2b]
extends = SlottedAloha2
...
When SlottedAloha2b is activated, lookups will consider sections in the following order (this is also called the section fallback chain): SlottedAloha2b, SlottedAloha2, SlottedAlohaBase, HighTrafficSettings, General.
The effect is the same as if the contents of the sections SlottedAloha2b, SlottedAloha2, SlottedAlohaBase, HighTrafficSettings and General were copied together into one section, one after another, [Config SlottedAloha2b] being at the top, and [General] at the bottom. Lookups always start at the top, and stop at the first matching entry.
The order of the sections in the fallback chain is computed using the C3 linearization algorithm ([Barrett1996]):
The fallback chain of a configuration A is
The section fallback chain can be printed with the simulation program's -X command-line option:
$ aloha -X SlottedAloha2b
OMNeT++ Discrete Event Simulation
...
Config SlottedAloha2b
Config SlottedAloha2
Config SlottedAlohaBase
Config HighTrafficSettings
General
The section fallback concept is similar to multiple inheritance in object-oriented languages, and benefits are similar too; one can factor out the common parts of several configurations into a “base” configuration, and additionally, one can reuse existing configurations without copying, by using them as a base. In practice one will often have “abstract” configurations too (in the C++/Java sense), which assign only a subset of parameters and leave the others open, to be assigned in derived configurations.
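As a sketch (configuration and parameter names are made up), a base configuration may fix the traffic parameters and leave the network size to be assigned in derived configurations:
[Config HighTrafficBase]        # "abstract": leaves numHosts unassigned
**.gen.sendIaTime = exponential(0.01s)

[Config HighTrafficSmall]
extends = HighTrafficBase
*.numHosts = 10

[Config HighTrafficLarge]
extends = HighTrafficBase
*.numHosts = 100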
When experimenting with a lot of different parameter settings for a simulation model, file inclusion and section inheritance can make it much easier to manage ini files.
Simulations get input via module parameters, which can be assigned a value in NED files or in omnetpp.ini -- in this order. Since parameters assigned in NED files cannot be overridden in omnetpp.ini, one can think about them as being “hardcoded”. In contrast, it is easier and more flexible to maintain module parameter settings in omnetpp.ini.
In omnetpp.ini, module parameters are referred to by their full paths (hierarchical names). This name consists of the dot-separated list of the module names (from the top-level module down to the module containing the parameter), plus the parameter name (see section [7.1.2.2]).
An example omnetpp.ini which sets the numHosts parameter of the toplevel module and the transactionsPerSecond parameter of the server module:
[General]
Network.numHosts = 15
Network.server.transactionsPerSecond = 100
Typename pattern assignments are also accepted:
[General]
Network.host[*].app.typename = "PingApp"
Models can have a large number of parameters to be configured, and it would be tedious to set them one-by-one in omnetpp.ini. OMNeT++ supports wildcard patterns which allow for setting several model parameters at once. The same pattern syntax is used for per-object configuration options; for example <object-path-pattern>.record-scalar, or <module-path-pattern>.rng-<N>.
The pattern syntax is a variation on Unix glob-style patterns. The most apparent differences to globbing rules are the distinction between * and **, and that character ranges should be written with curly braces instead of square brackets; that is, any-letter is expressed as {a-zA-Z} and not as [a-zA-Z], because square brackets are reserved for the notation of module vector indices.
Pattern syntax:
The order of entries is very important with wildcards. When a key matches several wildcard patterns, the first matching occurrence is used. This means that one needs to list specific settings first, and more general ones later. Catch-all settings should come last.
An example ini file:
[General]
*.host[0].waitTime = 5ms    # specifics come first
*.host[3].waitTime = 6ms
*.host[*].waitTime = 10ms   # catch-all comes last
The * wildcard is for matching a single module or parameter name in the path name, while ** can be used to match several components in the path. For example, **.queue*.bufSize matches the bufSize parameter of any module whose name begins with queue in the model, while *.queue*.bufSize or net.queue*.bufSize selects only queues immediately on network level. Also note that **.queue**.bufSize would match net.queue1.foo.bar.bufSize as well!
Sets and negated sets can contain several character ranges and also enumeration of characters. For example, {_a-zA-Z0-9} matches any letter or digit, plus the underscore; {xyzc-f} matches any of the characters x, y, z, c, d, e, f. To include '-' in the set, put it at a position where it cannot be interpreted as character range, for example: {a-z-} or {-a-z}. To include '}' in the set, it must be the first character: {}a-z}, or as a negated set: {^}a-z}. A backslash is always taken as a literal backslash (and not as an escape character) within set definitions.
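For example, the following sketch (module and parameter names are hypothetical) uses a character set to match a group of similarly named submodules:
[General]
**.host{0-2}.startTime = 1s     # matches host0, host1 and host2
**.host{3-9}.startTime = 5s     # matches host3 through host9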
Only nonnegative integers can be matched. The start or the end of the range (or both) can be omitted: {10..}, {..99} or {..} are valid numeric ranges (the last one matches any number). The specification must use exactly two dots. Caveat: *{17..19} will match a17, 117 and 963217 as well, because the * can also match digits!
An example for numeric ranges:
[General]
*.*.queue[3..5].bufSize = 10
*.*.queue[12..].bufSize = 18
*.*.queue[*].bufSize = 6    # this will only affect queues 0,1,2 and 6..11
It is also possible to utilize the default values specified in the NED files. The <parameter-fullpath>=default setting assigns the default value to a parameter if it has one.
The <parameter-fullpath>=ask setting will try to get the parameter value interactively from the user.
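A small sketch of the two settings (the parameter names are made up):
[General]
**.bufferSize = default    # use the default value given in the NED file
**.userName = ask          # prompt the user for the value (if the UI supports it)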
If a parameter was not set but has a default value, that value will be assigned. This is like having a **=default line at the bottom of the [General] section.
If a parameter was not set and has no default value, that will either cause an error or will be interactively prompted for, depending on the particular user interface.
More precisely, parameter resolution takes place as follows:
It is quite common in simulation studies that the simulation model is run several times with different parameter settings, and the results are analyzed in relation to the input parameters. OMNeT++ 3.x had no direct support for batch runs, and users had to resort to writing shell (or Python, Ruby, etc.) scripts that iterated over the required parameter space, to generate a (partial) ini file and run the simulation program in each iteration.
OMNeT++ 4.x largely automates this process, and eliminates the need for writing batch execution scripts. It is the ini file where the user can specify iterations over various parameter settings. Here is an example:
[Config AlohaStudy]
*.numHosts = ${1, 2, 5, 10..50 step 10}
**.host[*].generationInterval = exponential(${0.2, 0.4, 0.6}s)
This parameter study expands to 8*3 = 24 simulation runs, where the number of hosts iterates over the numbers 1, 2, 5, 10, 20, 30, 40, 50, and for each host count three simulation runs will be done, with the generation interval being exponential(0.2), exponential(0.4), and exponential(0.6).
How can it be used? First of all, running the simulation program with the -q numruns option will print how many simulation runs a given configuration expands to.
$ ./aloha -c AlohaStudy -q numruns
OMNeT++ Discrete Event Simulation
...
Config: AlohaStudy
Number of runs: 24
When -q runs is used instead, the program will print the list of runs, with the values of the iteration variables for each run. (Use -q rundetails to get even more info.) Note that the parameter study actually maps to nested loops, with the last ${...} becoming the innermost loop. The iteration variables are just named $0 and $1 -- we'll see that it is possible to give meaningful names to them. Please ignore the $repetition=0 part in the printout for now.
$ ./aloha -c AlohaStudy -q runs
OMNeT++ Discrete Event Simulation
...
Config: AlohaStudy
Number of runs: 24
Run 0: $0=1, $1=0.2, $repetition=0
Run 1: $0=1, $1=0.4, $repetition=0
Run 2: $0=1, $1=0.6, $repetition=0
Run 3: $0=2, $1=0.2, $repetition=0
Run 4: $0=2, $1=0.4, $repetition=0
Run 5: $0=2, $1=0.6, $repetition=0
Run 6: $0=5, $1=0.2, $repetition=0
Run 7: $0=5, $1=0.4, $repetition=0
...
Run 19: $0=40, $1=0.4, $repetition=0
Run 20: $0=40, $1=0.6, $repetition=0
Run 21: $0=50, $1=0.2, $repetition=0
Run 22: $0=50, $1=0.4, $repetition=0
Run 23: $0=50, $1=0.6, $repetition=0
Any of these runs can be executed by passing the -r <runnumber> option to Cmdenv. So, the task is now to run the simulation program 24 times, with -r running from 0 through 23:
$ ./aloha -u Cmdenv -c AlohaStudy -r 0
$ ./aloha -u Cmdenv -c AlohaStudy -r 1
$ ./aloha -u Cmdenv -c AlohaStudy -r 2
...
$ ./aloha -u Cmdenv -c AlohaStudy -r 23
This batch can be executed either from the OMNeT++ IDE (where you are prompted to pick an executable and an ini file, choose the configuration from a list, and just click Run), or using a little command-line batch execution tool (opp_runall) supplied with OMNeT++.
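For instance, opp_runall can execute the whole batch with a single command. A sketch (the -j option selects the number of parallel processes; see opp_runall -h for the exact set of options available in your version):
$ opp_runall -j4 ./aloha -u Cmdenv -c AlohaStudy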
Actually, it is also possible to make Cmdenv execute all runs in one go, by simply omitting the -r option.
$ ./aloha -u Cmdenv -c AlohaStudy
OMNeT++ Discrete Event Simulation
Preparing for running configuration AlohaStudy, run #0...
...
Preparing for running configuration AlohaStudy, run #1...
...
...
Preparing for running configuration AlohaStudy, run #23...
However, this approach is not recommended, because it is more susceptible to C++ programming errors in the model. (For example, if any of the runs crashes, the whole batch stops -- which may not be what the user wants.)
Let us return to the example ini file in the previous section:
[Config AlohaStudy]
*.numHosts = ${1, 2, 5, 10..50 step 10}
**.host[*].generationInterval = exponential( ${0.2, 0.4, 0.6}s )
The ${...} syntax specifies an iteration. It is sort of a macro: at each run, the whole ${...} string is textually replaced with the current iteration value. The values to iterate over do not need to be numbers (although the "a..b" and "a..b step c" forms only work on numbers), and the substitution takes place even inside string constants. So, the following examples are all valid (note that textual substitution is used):
*.param = 1 + ${1e-6, 1/3, sin(0.5)}
   ==>
*.param = 1 + 1e-6
*.param = 1 + 1/3
*.param = 1 + sin(0.5)

*.greeting = "We will simulate ${1,2,5} host(s)."
   ==>
*.greeting = "We will simulate 1 host(s)."
*.greeting = "We will simulate 2 host(s)."
*.greeting = "We will simulate 5 host(s)."
To write a literal ${..} inside a string constant, quote the left brace with a backslash: $\{..}.
To include a literal comma or close-brace inside a value, one needs to escape it with a backslash: ${foo\,bar\}baz} will parse as a single value, foo,bar}baz. Backslashes themselves must be doubled. As the above examples illustrate, the parser removes one level of backslashes, except inside string literals where they are left intact.
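Two small sketches of these escaping rules (the parameter names are made up):
**.mode = ${A\,B, C}      # two iteration values: "A,B" and "C"
**.tag  = ${x\}y, z}      # two iteration values: "x}y" and "z"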
One can assign names to iteration variables, which has the advantage that meaningful names will be displayed in the Cmdenv output instead of $0 and $1, and also lets one reference iteration variables at other places in the ini file. The syntax is ${<varname>=<iteration>}, and variables can be referred to simply as ${<varname>}:
[Config Aloha]
*.numHosts = ${N=1, 2, 5, 10..50 step 10}
**.host[*].generationInterval = exponential( ${mean=0.2, 0.4, 0.6}s )
**.greeting = "There are ${N} hosts"
The scope of the variable name is the section that defines it, plus sections based on that section (via extends).
Iterations may refer to other iteration variables, using the dollar syntax ($var) or the dollar-brace syntax (${var}).
This feature makes it possible to have loops where the inner iteration range depends on the outer one. An example:
**.foo = ${i=1..10}    # outer loop
**.bar = ${j=1..$i}    # inner loop depends on $i
When needed, the default top-down nesting order of iteration loops is modified (loops are reordered) to ensure that expressions only refer to more outer loop variables, but not to inner ones. When this is not possible, an error is generated with the “circular dependency” message.
For instance, in the following example the loops will be nested in k - i - j order, k being the outermost and j the innermost loop:
**.foo = ${i=0..$k}    # must be inner to $k
**.bar = ${j=$i..$k}   # must be inner to both $i and $k
**.baz = ${k=1..10}    # may be the outermost loop
And the next example will stop with an error because there is no “good” ordering:
**.foo = ${i=0..$j}
**.bar = ${j=0..$k}
**.baz = ${k=0..$i}    # --> error: circular references
Variables are substituted textually, and the result is normally not evaluated as an arithmetic expression. The result of the substitution is only evaluated where needed, namely in the three arguments of iteration ranges (from, to, step), and in the value of the constraint configuration option.
To illustrate textual substitution, consider the following contorted example:
**.foo = ${i=1..3, 1s+, -}001s
Here, the foo NED parameter will receive the following values in subsequent runs: 1001s, 2001s, 3001s, 1s+001s, -001s.
**.foo = ${i=10}
**.bar = ${j=$i+5}
**.baz = ${k=2*$j}          # bogus! $j should be written as ($j)
constraint = $i+50 < 2*$j   # ditto: should use ($i) and ($j)
Here, the baz parameter will receive the string 2*10+5 after the substitutions and hence evaluate to 25 instead of the correct 2*(10+5)=30; the constraint expression is similarly wrong. Mind the parens!
Substitution also works inside string constants within iterations (${..}).
**.foo = "${i=Jo}"                    # -> Jo
**.bar = ${"Hi $i", "Hi ${i}hn"}      # -> "Hi Jo" / "Hi John"
However, outside iterations the plain dollar syntax is not understood, only the dollar-brace syntax is:
**.foo = "${i=Day}"
**.baz = "Good $i"      # -> remains "Good $i"
**.baz = "Good ${i}"    # -> becomes "Good Day"
The body of an iteration may end in an exclamation mark followed by the name of another iteration variable. This syntax denotes a parallel iteration. A parallel iteration does not define a loop of its own, but rather, the sequence is advanced in lockstep with the variable after the “!”. In other words, the “!” syntax chooses the kth value from the iteration, where k is the position (iteration count) of the iteration variable after the “!”.
An example:
**.plan = ${plan= "A", "B", "C", "D"}
**.numHosts = ${hosts= 10, 20, 50, 100 ! plan}
**.load = ${load= 0.2, 0.3, 0.3, 0.4 ! plan}
In the above example, the only loop is defined by the first line, the plan variable. The other two iterations, hosts and load, just follow it: for the first value of plan, the first values of hosts and load are selected, and so on.
There are a number of predefined variables: ${configname} and ${runnumber} with the obvious meanings; ${network} is the name of the network that is simulated; ${processid} and ${datetime} expand to the OS process id of the simulation and the time it was started; and there are some more: ${runid}, ${iterationvars} and ${repetition}.
${runid} holds the run ID. When a simulation is run, a run ID is assigned that uniquely identifies that instance of running the simulation: every subsequent run of the same simulation will produce a different run ID. The run ID is generated as the concatenation of several variables like ${configname}, ${runnumber}, ${datetime} and ${processid}. This yields an identifier that is unique “enough” for all practical purposes, yet it is meaningful for humans. The run ID is recorded into result files written during the simulation, and can be used to match vectors and scalars written by the same simulation run.
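These variables can be referenced in the ini file like iteration variables. For instance, a sketch that uses them in the vector result file name (output-vector-file is the option mentioned earlier; the path shown is only an example):
[General]
output-vector-file = "results/${configname}-${runnumber}.vec"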
In cases when not all combinations of the iteration variables make sense or need to be simulated, it is possible to specify an additional constraint expression. This expression is interpreted as a conditional (an "if" statement) within the innermost loop, and it must evaluate to true for the variable combination to generate a run. The expression should be given with the constraint configuration option. An example:
*.numNodes = ${n=10..100 step 10}
**.numNeighbors = ${m=2..10 step 2}
constraint = ($m) <= sqrt($n)    # note: parens needed due to textual substitution
The expression syntax supports most C language operators including boolean, conditional and binary shift operations, and most <math.h> functions; data types are boolean, double and string. The expression must evaluate to a boolean.
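For instance, conditions can be combined with boolean operators; a sketch built on the iteration variables of the previous example:
constraint = ($m) <= sqrt($n) && ($n) != 70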
It is directly supported to perform several runs with the same parameters but different random number seeds. There are two configuration options related to this: repeat and seed-set.