Our proposed ABACS physical access control system can successfully process requests and grant or revoke access based on user, access point, and environment attributes (Asiminidis, Kokkonis, & Kontogiannis, 2018). However, we must verify that the system can withstand heavy request traffic while still responding within short time frames. Consequently, as Petrakis et al. (2020) noted in their work on the iPACS system, the server's access control endpoint needs to be benchmarked to verify its behavior in high-traffic environments. The iPACS system used the traditional HTTP protocol to send access requests from users' mobile devices to the fog server, whereas our proposed system relies on the CoAP protocol established between the access point unit and the local server. Consequently, our benchmarking procedure required different tools and procedures. We used the Californium CoAP-Extplugtest software program to conduct our benchmarking, similarly to how Petrakis et al. (2020) obtained their results. Along with the POST request and URI instructions, a payload file containing valid user and access point attributes was attached to each request. In addition, a file containing the PSK information was supplied to authenticate our benchmark clients to the server (Bonomi, Milito, Zhu, & Addepalli, 2012).
Benchmarking was conducted in two phases. The first phase involved sending 200 and 2000 requests at concurrency levels of 1, 40, and 120, without restricting the upload data rate, to observe how the backend system handles rapidly incoming requests. The second phase repeated those same requests under an upload data rate limit, to determine the minimum bandwidth the network installation must provide to allow proper communication between the access point and the local server.
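The two-phase procedure can be sketched as follows. Here `send_request` is a hypothetical stand-in for the Californium plugtest client call (the real benchmark sends CoAPS POSTs over DTLS); it is simulated with a short sleep so the dispatch logic stays self-contained:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def send_request(payload: bytes) -> float:
    """Placeholder for one CoAPS POST; returns servicing time in ms.

    In the real benchmark this round trip is performed by the
    Californium plugtest client over DTLS; here it is simulated.
    """
    start = time.perf_counter()
    time.sleep(0.001)  # stand-in for network transfer + server processing
    return (time.perf_counter() - start) * 1000


def run_benchmark(total_requests: int, concurrency: int,
                  payload: bytes) -> list[float]:
    """Dispatch `total_requests` at the given concurrency level and
    collect per-request servicing times (ms)."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(send_request, [payload] * total_requests))


# Phase 1 example: 200 requests sequentially (concurrency = 1),
# with an illustrative attribute payload.
times = run_benchmark(200, 1, b'{"user": "...", "access_point": "..."}')
```

Phase 2 repeats the same calls while an external tool (NetLimiter, in our setup) caps the upload rate.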
In addition to the maximum servicing time per request percentile, we also monitored the average CPU and memory consumption on the Raspberry Pi using the Linux top command, which provides detailed information about system processes and resource utilization. Data rates were limited using the "NetLimiter" software application. The results of the first phase of benchmarking are presented in Tables 1 to 4. The obtained average times per percentile are included, along with the observed data rate and CPU consumption.
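For readers reproducing the analysis, the per-percentile figures in Tables 1 to 4 can be derived from the raw servicing times with a nearest-rank percentile; this is a minimal sketch, and the sample values are illustrative rather than measured data:

```python
import math


def percentile(times_ms: list[float], pct: float) -> float:
    """Nearest-rank percentile of servicing times, as reported
    in Tables 1 to 4 (pct given in percent, e.g. 99.9)."""
    ordered = sorted(times_ms)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]


# Illustrative sample of per-request servicing times (ms).
samples = [34, 21, 22, 45, 44, 20, 23, 19, 25, 22]
p95 = percentile(samples, 95)
avg = sum(samples) / len(samples)
```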
Table 1: 200 CoAPS requests (concurrency = 1)

| Percentage of served requests | 95% | 99% | 99.9% | 100% | Average time (ms) | Average CPU |
|---|---|---|---|---|---|---|
| Time taken at 3.83 KB/s (ms) | 34 | 44 | 45 | 45 | 22.34 | 19.7% |
Table 2: 2000 CoAPS requests (concurrency = 1)

| Percentage of served requests | 95% | 99% | 99.9% | 100% | Average time (ms) | Average CPU |
|---|---|---|---|---|---|---|
| Time taken at 21.36 KB/s (ms) | 49 | 59 | 74 | 83 | 29.30 | 18.51% |
Table 3: 2000 CoAPS requests (concurrency = 40)

| Percentage of served requests | 95% | 99% | 99.9% | 100% | Average time (ms) | Average CPU |
|---|---|---|---|---|---|---|
| Time taken at 62.16 KB/s (ms) | 744 | 794 | 824 | 837 | 434.41 | 53.28% |
Table 4: 2000 CoAPS requests (concurrency = 120)

| Percentage of served requests | 95% | 99% | 99.9% | 100% | Average time (ms) | Average CPU |
|---|---|---|---|---|---|---|
| Time taken at 56.63 KB/s (ms) | 2,494 | 2,804 | 2,859 | 2,871 | 1,367.22 | 62.22% |
The system performs well across different numbers of requests and concurrency levels, meeting the latency targets defined in our non-functional requirements. Sequential requests had to be served with an average servicing time under 1 second, which was achieved, as shown in Tables 1 and 2, where the average time hovered between only 20 and 30 milliseconds. For concurrent requests, the target of servicing all requests with an average time under 3 seconds was also met: Tables 3 and 4 show that at high concurrency levels of 40 and 120, the system still achieves low average servicing times of 434.41 and 1,367.22 milliseconds, respectively.
First, we notice that as the concurrency level rises, the data rate and CPU consumption also rise to meet the demand of sending and processing a large number of incoming requests. It is also worth noting that CPU consumption does not increase with the number of requests in sequential scenarios, most likely because requests are processed one at a time, so increasing their count does not change the system's behavior. The system also needs to be benchmarked at lower data rates, to evaluate its performance over constrained networks and determine the minimum data rate that still achieves our target average times of 1 and 3 seconds in sequential and concurrent scenarios, respectively. To limit the network speed, the NetLimiter software program was used. In every scenario, two graphs represent the average servicing time and the corresponding CPU consumption, as shown in Fig. 8. Results are presented as line graphs in Figs. 7 to 12.
In Fig. 7, when benchmarking the backend system with 200 requests at concurrency 1, the system achieves the target average time of 1 second at data rates between 0.6 and 0.7 KB per second. Allowing for the slight variations and inconsistencies inherent in network quality and latency, it is safe to place our system's threshold data rate at around 0.7 KB per second in sequential scenarios (concurrency 1).
In Fig. 9, when running 2000 requests at concurrency 40, there is a clear increase in the minimum data rate required to achieve the target time of at most 3 seconds: around 17 KB per second are needed to provide an average servicing time of 3 seconds. This is explained by the fact that requests are now sent concurrently, which loads the network infrastructure and necessitates higher speeds to successfully transfer all the requests from the client to the server. In addition, Fig. 10 shows that the backend system must also process all the incoming requests concurrently, which increases the CPU consumption as the data rate rises.
Finally, in Fig. 11, when running 2000 requests at concurrency 120, the minimum data rate threshold increases further, hovering around 49 KB per second, which is again explained by the sheer number of requests that saturate the network and demand a higher rate to be transferred successfully without errors. The CPU consumption also increases proportionally with the data rate, as shown in Fig. 12.
It is important to compare our system's performance with an existing system, such as iPACS, to evaluate the impact of CoAP and our microservice backend implementation on similar access control systems. Figures 13 and 14 plot the longest-performing request of iPACS against that of our proposed system. We also benchmarked our system at the same data rates observed in the iPACS benchmarking to compare results closely. These data rates were relatively low: 0.61 KB/s for both sequential scenarios, 7.04 KB/s at concurrency 40, and 7.61 KB/s at concurrency 120. We observe that our system's comparative performance varies considerably and depends heavily on the network data rate. At similarly low bandwidths, our system performs slightly better than iPACS in sequential scenarios; however, when processing concurrent requests at the same low data rates, iPACS outperforms our proposed system. This is likely due to the difference in payload format and size between the two systems. Our reliance on the ABAC model means that all access point and user attributes must be sent within the request payload. As mentioned in previous sections, a significant advantage of the ABAC model over the RBAC model is its flexibility and granularity in access control: ABAC can consider a wider array of attributes, such as user properties, resource characteristics, and environmental conditions, which allows for more fine-grained and context-aware access decisions. It also enhances security by ensuring that access decisions are based on comprehensive information rather than predefined roles alone.
While the ABAC model certainly provides more versatility, it also noticeably increases the payload size, even after extensive optimization and reduction. A larger payload requires higher bandwidth to be sent promptly without connection timeouts. iPACS, by contrast, relies on the RBAC model, which bases its decisions on the code area and the user role only, thereby reducing the amount of information needed in the payload. Unfortunately, details about the payload format and size were not disclosed in Petrakis et al.'s work (2020). Nonetheless, when data rate limits are removed, our system becomes much more efficient and outperforms iPACS. This confirms that data transmission rates and payload sizes are the main factors behind our system's performance relative to iPACS's. Once communication rates are suitably adapted to our payload requirements, our proposed system's backend efficiently processes requests in both sequential and concurrent scenarios: with sequential requests, its longest-performing request outperforms iPACS's by at least tenfold, and with concurrent requests it performs at least twice as well.
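The payload-size gap can be illustrated with two hypothetical JSON payloads; the attribute names below are invented for illustration and do not reflect either system's actual schema:

```python
import json

# Hypothetical ABAC payload: user, access point, and environment
# attributes all travel inside the request.
abac_payload = {
    "user": {"id": "u-1042", "role": "staff", "department": "IT",
             "clearance": 2},
    "access_point": {"id": "ap-07", "building": "B", "floor": 3,
                     "zone": "server-room"},
    "environment": {"time": "2023-05-14T09:12:00Z", "day": "Sunday"},
}

# Hypothetical RBAC payload: role and area identifier only.
rbac_payload = {"user_role": "staff", "area_code": "B3"}

abac_size = len(json.dumps(abac_payload).encode())
rbac_size = len(json.dumps(rbac_payload).encode())
# The ABAC payload is several times larger, so it needs proportionally
# more bandwidth to arrive within the same servicing deadline.
```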
The main communication between the embedded client and the server in the proposed system is conducted via CoAP instead of HTTP. The CoAP protocol reduces the header size and optimizes bandwidth consumption, which makes it more efficient than HTTP. To support these claims, we repeated the benchmarking on our system with both protocols. The CoAP request is sent to the CoAP server, which in turn communicates with the access control microservice, while the HTTP request is sent to the API gateway, which also communicates with the access control microservice. Both protocols use a secure channel, DTLS and TLS respectively. The results are shown in Table 5.
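The header-size difference can be illustrated by comparing a minimal HTTP/1.1 request header with CoAP's 4-byte fixed header (RFC 7252). The request below and the CoAP option-size figure are rough illustrative assumptions, not measurements from our system:

```python
# A minimal HTTP/1.1 POST header block (illustrative values).
http_request_headers = (
    "POST /access HTTP/1.1\r\n"
    "Host: 192.168.1.10:8443\r\n"
    "Content-Type: application/json\r\n"
    "Content-Length: 512\r\n"
    "\r\n"
)
http_overhead = len(http_request_headers.encode())  # over 100 bytes

# CoAP: 4-byte fixed header (RFC 7252), a token of up to 8 bytes,
# plus a rough 10-byte allowance for compact binary options.
coap_overhead = 4 + 8 + 10
# CoAP's per-request framing overhead is a small fraction of HTTP's,
# which matters when thousands of requests share a constrained link.
```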
Table 5: Performance comparison between CoAP and HTTP (average request servicing time)
| Scenario | HTTP | CoAP |
|---|---|---|
| 200 requests with concurrency 1 (in ms) | 61.43 at 10.77 KB/s | 53.99 at 10.77 KB/s |
| 2000 requests with concurrency 1 (in ms) | 54.26 at 12.59 KB/s | 49.69 at 12.59 KB/s |
| 2000 requests with concurrency 40 (in ms) | 1,257.82 at 47.14 KB/s | 768.10 at 47.14 KB/s |
| 2040 requests with concurrency 120 (in ms) | 5,521.77 at 30.55 KB/s | 4,947.14 at 30.55 KB/s |
We notice that the CoAP protocol performs better than the HTTP protocol given the same data rate and communicated payloads. First, the performance difference between the two protocols for sequential requests is very small, because effectively no stress is placed on either the server or the network to communicate and process each request; the CoAP protocol still sends less data and hence performs better by a small margin. Secondly, for concurrent requests, the CoAP protocol surpasses HTTP by a large margin. Under the heavy load placed on the network, it is essential that payloads be minimized so they can be transferred more quickly, and the CoAP protocol achieves this, as demonstrated by our results.
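The relative improvement of CoAP over HTTP can be quantified directly from the averages reported in Table 5:

```python
# Average servicing times from Table 5 (ms), measured at identical data rates.
results = {
    "200 req, c=1":   (61.43, 53.99),
    "2000 req, c=1":  (54.26, 49.69),
    "2000 req, c=40": (1257.82, 768.10),
    "2040 req, c=120": (5521.77, 4947.14),
}

# Percent reduction in average servicing time when switching HTTP -> CoAP.
improvement = {
    scenario: round((http_t - coap_t) / http_t * 100, 1)
    for scenario, (http_t, coap_t) in results.items()
}
# Sequential scenarios improve by roughly 8-12%, the concurrency-40
# scenario by about 39%, and the concurrency-120 scenario by about 10%.
```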