By Lavi Semel, CTO, Altair Semiconductor
There are many hurdles to overcome on the path to developing a successful cellular IoT device.
Today, the IoT device and smartphone ecosystems are worlds apart
The cellular IoT market is very different from the smartphone market. Requirements and challenges vary widely, so there is no one-size-fits-all solution. KPIs such as battery life, size, latency, location accuracy and radio conditions all change according to use case. The fact that many use cases offer no human access to the device creates challenges around reliability, monitoring, security and battery life.
What are the main challenges of Cellular IoT?
The main challenges of cellular IoT stem from the lack of physical access and the need to withstand sometimes extreme radio and ambient conditions for a very long time. How can we ensure that radio conditions will remain good enough for 15 years? Will network coverage remain the same? At what cost? How do we monitor, debug or upgrade the devices? Are there long-term (e.g. seasonal) effects on device behavior and power consumption?
Global connectivity challenges – With smartphones, there is human access, a rechargeable battery and few connectivity issues. Users can select which networks to search for and lock on to, and when to do so. By contrast, a cellular IoT device must autonomously decide when to search for a network and which one to connect to, based on low data-connectivity costs and ‘good enough’ coverage. The need to support more than one RAT (radio access technology, e.g. NB-IoT and CAT-M, and sometimes also 2G) makes this even harder.
The limited-coverage/out-of-coverage problem is much more severe than in broadband. Because the device is typically static, has a single antenna and uses limited bandwidth, it is highly sensitive to coverage and interference.
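The "which network, and when" decision typically lives in a small connection-manager policy on the device. Below is a minimal sketch of such a selection step, assuming hypothetical RSRP floors and a per-operator cost ranking supplied at provisioning time; the names and thresholds are illustrative, not values from any specific network or chipset.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative RSRP floors (dBm) per RAT; real thresholds depend on band and deployment.
MIN_RSRP = {"NB-IoT": -128.0, "CAT-M": -120.0, "2G": -105.0}

@dataclass
class ScanResult:
    operator: str      # operator / PLMN reported by the network scan
    rat: str           # "NB-IoT", "CAT-M" or "2G"
    rsrp_dbm: float    # measured reference signal received power
    cost_rank: int     # lower = cheaper connectivity (hypothetical provisioning input)

def pick_network(scan: List[ScanResult]) -> Optional[ScanResult]:
    """Return the cheapest candidate with 'good enough' coverage, or None."""
    usable = [r for r in scan if r.rsrp_dbm >= MIN_RSRP.get(r.rat, -100.0)]
    if not usable:
        return None  # nothing usable: go back to sleep and retry later with back-off
    # Prefer low connectivity cost; break ties with the stronger signal.
    return min(usable, key=lambda r: (r.cost_rank, -r.rsrp_dbm))

if __name__ == "__main__":
    scan = [
        ScanResult("OperatorA", "NB-IoT", -118.0, cost_rank=1),
        ScanResult("OperatorB", "CAT-M", -102.0, cost_rank=2),
    ]
    print(pick_network(scan))  # OperatorA wins: cheaper and above its RSRP floor
```

A production connection manager adds retry back-off, cell blacklisting and RAT fallback on top of this, but the core trade-off between cost and 'good enough' coverage stays the same.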
Battery life is sometimes the most important KPI, yet it is very difficult to predict power consumption in a cellular environment. It is affected by many factors, including the chipset, wireless conditions, network parameters and the application. Power consumption may change drastically between carriers, and even within the same carrier (between eNB vendors).
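A simple duty-cycle budget makes the battery-life discussion concrete. The sketch below uses illustrative numbers (sleep current, active current, connection time, battery capacity) that stand in for values you would measure on your own device and network; it is a first-order estimate, not a substitute for field measurements.

```python
# Rough duty-cycle power budget for a periodic reporting device.
# All numbers are illustrative assumptions; measure your own device on the target network.

SLEEP_CURRENT_UA = 3.0      # deep-sleep (e.g. PSM) floor, microamps
ACTIVE_CURRENT_MA = 120.0   # average current while registered and transmitting, milliamps
ACTIVE_TIME_S = 10.0        # time awake per report, seconds
REPORTS_PER_DAY = 1
BATTERY_MAH = 2400.0        # usable battery capacity

def average_current_ma() -> float:
    active_fraction = (ACTIVE_TIME_S * REPORTS_PER_DAY) / 86400.0
    sleep_fraction = 1.0 - active_fraction
    return ACTIVE_CURRENT_MA * active_fraction + (SLEEP_CURRENT_UA / 1000.0) * sleep_fraction

def battery_life_years() -> float:
    return BATTERY_MAH / average_current_ma() / (24.0 * 365.0)

print(f"Average current: {average_current_ma() * 1000:.1f} uA")
print(f"Estimated battery life: {battery_life_years():.1f} years")
```

Even this crude model shows why radio conditions dominate: if poor coverage doubles the time spent transmitting (for example through coverage-enhancement repetitions), the active term roughly doubles and years disappear from the estimate.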
Reducing the device’s size and cost is another challenge. Manufacturers need to look at the whole BOM structure: a high level of integration reduces cost and size and eases development.
There are also many security challenges. Aggressive cost targets constrain memory size, CPU performance, certification and the production process. Devices sleep most of the time, so connections are not maintained, and there is a strict limit on how much data can be communicated. Legacy security protocols may be inefficient and power-hungry.
Top Things to Consider When Building a Cellular-Based IoT Device
Below are a few tips on what to pay attention to when building a cellular-based IoT device.
Let’s start with the obvious:
- Target battery life: First, define the battery lifetime and its required probability. That is, do we target the 90th percentile, the 95th percentile, or the average power consumption across all devices?
- Transmission profiles and message sizes: Take security overhead into account. In some cases, a 50-byte payload becomes the negligible part of the transmission once security is added.
- Reachability (PSM or eDRX): Does the device need to be reachable? If so, what is the maximum acceptable latency? (A minimal configuration sketch follows this list.)
- Device conditions: Is the device stationary, and if so, where will it be located? Or is it mobile, and if so, what is the mobility profile?
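For the reachability item above, PSM and eDRX are normally requested from the network with the standard 3GPP TS 27.007 AT commands AT+CPSMS and AT+CEDRXS. The minimal sketch below assumes a module on a serial port that supports those commands; the port name, baud rate and timer encodings are illustrative, and the network may grant different values than the ones requested.

```python
import serial  # pyserial

# Illustrative settings; adjust the port and timer encodings for your module and use case.
PORT = "/dev/ttyUSB0"

def send_at(ser: serial.Serial, cmd: str) -> str:
    """Send one AT command and return the raw response."""
    ser.write((cmd + "\r\n").encode())
    return ser.read(256).decode(errors="ignore")

with serial.Serial(PORT, 115200, timeout=2) as ser:
    # Request PSM: periodic TAU (T3412 ext) = 1 hour, active time (T3324) = 10 s.
    # The quoted strings are the binary timer encodings from 3GPP TS 27.007 / TS 24.008.
    print(send_at(ser, 'AT+CPSMS=1,,,"00100001","00000101"'))
    # Alternatively, request eDRX for LTE-M (AcT-type 4) with a ~81.92 s cycle.
    print(send_at(ser, 'AT+CEDRXS=1,4,"0101"'))
```

Always read back the values the network actually granted (for example via AT+CEREG=4 unsolicited reporting on modules that support it), because the network, not the device, has the final say.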
Followed by the less obvious:
- Radio technology – NB-IoT or CAT-M?
The industry has developed two standards for radio technology, NB-IoT and CAT-M, with quite a large overlap between them. NB-IoT targets very low data-rate applications, usually non-TCP, batch communication and stationary devices such as water meters. CAT-M targets higher data throughput, mobility and voice. There is a myth that NB-IoT power consumption is lower than CAT-M's; this is not always the case. Network coverage is also not constant, especially when a device stays deployed for years. A device that has been in the field for over two years might need an upgrade, and doing this over NB-IoT is very challenging because throughput is low. A dual-mode device therefore gives you peace of mind: you enjoy both worlds, at a higher cost (which can pay off in the end).
- Power consumption: Networks today are not optimized for edge IoT devices; they are tuned for smartphones. This leads to issues with SIM power consumption, network timers, C-DRX and scheduling parameters. Close cooperation between network operators and chipset/module suppliers can overcome some of these challenges.
- Transport and application protocols: HTTPS is not relevant for IoT devices with battery or data-connectivity constraints. MQTT is better, but still not efficient. CoAPs (used by LwM2M) is the most efficient standards-based protocol; see the sketch after this list.
- Addressing network coverage: First, optimize your network-scanning parameters. Then monitor battery life, integrate a FOTA solution and design a robust connection manager. Finally, test extensively in the field.
- Positioning technologies: For GNSS, the need in many cases is only for occasional fixes (non-tracking mode). For cellular-based positioning, NB-IoT/CAT-M networks are new and base-station databases are still immature; a multi-mode modem (as opposed to a single-RAT one) can be used to close this gap.
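To make the transport-protocol comparison above concrete, here is a minimal CoAP request using the open-source aiocoap library for Python. The server URI is a placeholder; a real LwM2M deployment would run CoAP over DTLS (CoAPs) and add registration, retransmission and observe handling on top of this.

```python
import asyncio
from aiocoap import Context, Message, GET

# Placeholder endpoint; replace with your own CoAP server or LwM2M bootstrap address.
SENSOR_URI = "coap://coap.example.com/sensors/temperature"

async def fetch_reading() -> bytes:
    """Send a single confirmable GET and return the response payload."""
    context = await Context.create_client_context()
    request = Message(code=GET, uri=SENSOR_URI)
    response = await context.request(request).response
    return response.payload

if __name__ == "__main__":
    payload = asyncio.run(fetch_reading())
    print(payload.decode(errors="ignore"))
```

The point is less the library than the transport: CoAP runs over UDP, so a short request/response exchange avoids the TCP and TLS handshakes that make HTTPS so costly on a battery-powered NB-IoT or CAT-M link.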
Work with a trusted cellular IoT partner
As you can see, designing a cellular IoT device is very challenging, and you need an experienced partner. As a leading provider of cellular IoT chipsets targeting all IoT markets and all relevant use cases, Altair is here to help you achieve your goals.
Designing an IoT device and have some questions? Contact us