Database Distributor Unit (DBDU)
The main responsibility of the database distributor unit of the HLR is to distribute HLR/AUC subscriber-related data to the correct unit (HLRU/ACU pair). This data can come either from the network or from an administrative unit (MML/AdC, etc.). The DBDU pair has duplicated mass memory.
The DBDU is backed up according to the 2N redundancy principle.
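As an illustrative sketch only (not the actual DX200 implementation), the DBDU's distribution decision can be thought of as a lookup from a subscriber key (for example an IMSI range) to the responsible HLRU/ACU pair. All ranges and unit names below are hypothetical:

```python
# Illustrative sketch: map subscriber IMSI ranges to HLRU/ACU unit pairs,
# mimicking the DBDU's routing role. The ranges and unit names are
# invented for illustration, not taken from the product.
IMSI_RANGES = [
    (244050000000000, 244050999999999, ("HLRU-0", "ACU-0")),
    (244051000000000, 244051999999999, ("HLRU-1", "ACU-1")),
]

def route_to_unit_pair(imsi: int):
    """Return the (HLRU, ACU) pair responsible for this subscriber."""
    for low, high, pair in IMSI_RANGES:
        if low <= imsi <= high:
            return pair
    raise KeyError(f"No unit pair configured for IMSI {imsi}")

print(route_to_unit_pair(244051000000042))  # -> ('HLRU-1', 'ACU-1')
```

Whatever the real mapping mechanism is, the key point stands: the DBDU hides the internal unit layout from the sources of the data (network or administrative interfaces).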
CMM = Central Memory and Marker
CMM (Central Memory and Marker) units
ET (Exchange Terminal)
CCSU (CCS7 Signalling Units)
STU (Statistical Units)
DBDU (Database Distributor Units) - The main responsibility of the database distributor unit of the SRRi is to distribute subscriber-related data to the correct SRRU pair.
One CLSU (Clock System Unit)
OMU (Operation and Maintenance Unit)
GSW (Group Switches)
SRRU (Service Routing Register Units): The SRRU is responsible for number translations and subscriber information. The SRRU is also responsible for updating, removing and retrieving subscriber data.
Volume deployment of broadband access systems (Fast Internet xDSL access) is starting this year. The massive service rollout will happen in 2000-2001. Our target is to be a leading supplier of Fast Internet solutions (xDSL access networks).
We will offer IP connectivity that can be adapted to existing infrastructure and business environment.
We will develop features and applications for our solution to support new IP services, e.g. remote work, IP Virtual Private Network (VPN), IP telephony and video streaming.
Push to talk Register in Push to talk Core Network
Nokia Push to talk Register is part of the Push to talk over Cellular system, which enables the Push to talk over Cellular (PoC) service. To summarise, Push to talk Register
stores the provisioning data (users, groups and folders) of Push to talk Core Network to the provisioning database
implements Provisioning and Management Functions (PMF), which distribute provisioning data to the management plane of Push to talk Core Network
provides an HTTP human user interface and an application programming interface (API) called Provisioning Interface, which enables the connection between the network operators' provisioning systems and Push to talk Core Network.
Push to talk Core Network is connected to the operator's provisioning system through Push to talk Register. Push to talk Register implements the provisioning plane of Push to talk Core Network. The main functionality of Push to talk Register is to offer provisioning capabilities for the PoC service. Push to talk Register is a centralised element of Push to talk Core Network where user data is stored.
Push to talk Register connects directly through the management plane to the PoC Call Processor. Push to talk Register also offers an application programming interface (API) that makes the connection between the operator's provisioning systems and Push to talk Core Network possible. This API is called the PoC Provisioning interface.
Management Users (MU) can also log into the PoC System using the same PoC Provisioning interface. The Management Users can then perform subscription management operations according to their management rights and folder parameters. Directly connected Management Users can also perform network management and troubleshooting operations.
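To make the role of the Provisioning interface concrete, the sketch below shows how an operator's provisioning system might construct an HTTP request towards it. The endpoint path, payload fields and authentication scheme are all invented for illustration; the actual protocol is defined by the interface specification:

```python
# Hypothetical sketch of a client of the PoC Provisioning interface.
# The URL path, JSON fields and bearer-token auth are assumptions made
# for illustration only, not the real interface definition.
import json
from urllib import request

def build_provision_request(base_url, user_id, folder, token):
    """Build (but do not send) an HTTP request that provisions one user."""
    payload = json.dumps({"userId": user_id, "folder": folder}).encode()
    return request.Request(
        f"{base_url}/provisioning/users",  # hypothetical endpoint path
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # hypothetical auth scheme
        },
        method="POST",
    )
```

The same HTTP interface serves both machine clients (the operator's provisioning systems) and human Management Users, as described above.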
The Intelligent Service Node (ISN) is the combination of the GGSN, TA and CA.
The block diagram shows the functional units, the main plug-in units and the schematic connections.
These will be explained in detail later.
The fourth important aspect is how the network is built up. This flexibility has a significant economic impact on total network cost.
This flexibility allows operators to achieve the capacity they need, when they need it, without the expense of carrying surplus capacity.
The RNC can be located at core network or remote sites. The RNC location can be optimised with respect to transmission and other requirements. Both site location options are available to the operator, and normal site operating environment conditions are valid for both options; please see the RNC product description as well.
RNC processing performance (in terms of Mbit/s) gives the flexibility to accommodate a different number of CUs (BTSs) in different RNC serving areas:
In the city-centre hot-spot capacity case, the appropriate RNC serving area is some hundreds of CUs with 196 Mbit/s RNC processing capacity (high traffic volume, e.g. 196 Mbit/s capacity via the BTSs towards the RNC).
In the coverage-oriented case, the appropriate RNC serving area can be around 1000 CUs with the needed 196 Mbit/s RNC processing capacity (light traffic via each BTS towards the RNC, so many BTSs can be served under the RNC).
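The two cases above can be sketched as simple back-of-the-envelope arithmetic: with a fixed 196 Mbit/s of RNC processing capacity, the number of CUs (BTSs) one RNC can serve depends on the average traffic per CU. The per-CU traffic figures below are assumed values chosen only to reproduce the orders of magnitude in the text:

```python
# Back-of-the-envelope RNC dimensioning sketch based on the figures above.
RNC_CAPACITY_MBIT_S = 196

def max_cus(avg_traffic_per_cu_mbit_s: float) -> int:
    """Approximate number of CUs one RNC can serve at a given load per CU."""
    return round(RNC_CAPACITY_MBIT_S / avg_traffic_per_cu_mbit_s)

# Hot-spot case: heavily loaded cells, e.g. ~1 Mbit/s per CU (assumed
# figure) -> on the order of a couple of hundred CUs per RNC.
print(max_cus(1.0))   # 196
# Coverage case: light traffic, e.g. ~0.2 Mbit/s per CU (assumed figure)
# -> around a thousand CUs per RNC.
print(max_cus(0.2))   # 980
```

The point of the sketch is that the same processing capacity serves either few heavily loaded cells or many lightly loaded ones.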
The RNC architecture is highly modular in order to provide flexibility and to support different services. For example, when the data service mix grows, there is no need to change or install new HW units in the RNC.
For the logical Iu, Iur and Iub interfaces, both transport options are available: SDH and/or PDH over ATM.
For O&M purposes, IP over Ethernet or ATM is available.
The RNC is a fault-tolerant packet switching platform (switching is based on ATM technology). RNC functionalities are optimised by using DSP- and Intel-based processors. All relevant functions are protected by the 2N, N+1 or SN+ redundancy principles. Future requirements for the RNC will be implemented by using all the benefits that the modular RNC architecture offers (scalability, flexibility, different DSP and Intel CPUs, and modular SW architecture).
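As a generic illustration of what the redundancy principles mentioned above cost in spare hardware (the figures are generic overhead ratios, not product data): 2N duplicates every unit, while N+1 adds a single shared spare to a pool of units.

```python
# Illustrative comparison of two of the redundancy principles mentioned
# above. These are generic textbook ratios, not RNC product data.
def spare_units(working: int, scheme: str) -> int:
    """Number of spare units needed to protect a pool of `working` units."""
    if scheme == "2N":     # full duplication: one hot standby per unit
        return working
    if scheme == "N+1":    # one shared spare for the whole pool
        return 1
    raise ValueError(f"unknown scheme {scheme!r}")

for scheme in ("2N", "N+1"):
    print(scheme, "->", spare_units(8, scheme), "spares for 8 working units")
```

This is why 2N is typically reserved for the most critical functions, while N+1 protects larger pools of identical units at much lower cost.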