This is the last part of my review of three Intel-based UP AI development kits. After testing the UP TWL AI Dev Kit with an Intel N150 CPU and the UP Squared Pro TWL AI Dev Kit with an Intel N150 CPU coupled with a Hailo-8L M.2 AI accelerator, I’ll now report my experience with the high-end UP Xtreme ARL AI Dev Kit, built around a single board computer with a 14-core Intel Core Ultra 5 225H “Arrow Lake” processor and Intel Arc 130T graphics delivering up to a combined 83 TOPS of AI performance.
I’ve followed the same procedure as with the previous models, using the pre-installed Ubuntu 24.04 Pro operating system to report system information, run some benchmarks, and go through AI workloads using Nx Meta and the AAEON UP AI Toolkit. I just ran additional benchmarks and tests since it’s my first Intel Arrow Lake platform.
UP Xtreme ARL system information on Ubuntu 24.04
The UP Xtreme ARL SBC (AAEON UPX-ARL01) comes preloaded with Ubuntu 24.04.3 LTS on a 256.1 GB NVMe SSD, and the system is powered by a 14-core Intel Core Ultra 5 225H processor with 16 GB of RAM, as advertised.
The inxi utility provides more insights into the system:
devkit@devkit-UP-TWL01:~$ sudo inxi -Fc0
System:
  Host: devkit-UP-TWL01 Kernel: 6.14.0-32-generic arch: x86_64 bits: 64
  Console: pty pts/1 Distro: Ubuntu 24.04.3 LTS (Noble Numbat)
Machine:
  Type: Desktop Mobo: AAEON model: UPX-ARL01 v: V1.0 serial: 250129224
  UEFI: American Megatrends LLC. v: UXARAM10 date: 07/01/2025
CPU:
  Info: 14-core model: Intel Core Ultra 5 225H bits: 64 type: MCP cache: L2: 22 MiB
  Speed (MHz): avg: 1235 min/max: 400/6300:6100:2500 cores: 1: 400 2: 400 3: 400
    4: 400 5: 400 6: 400 7: 400 8: 400 9: 4286 10: 4301 11: 400 12: 4308 13: 400 14: 400
Graphics:
  Device-1: Intel Arrow Lake-P [Intel Graphics] driver: i915 v: kernel
  Device-2: Sunplus Innovation FHD Camera driver: snd-usb-audio,uvcvideo type: USB
  Display: server: X.org v: 1.21.1.11 with: Xwayland v: 23.2.6 driver: gpu: i915
    tty: 80x24 resolution: 1920x1080
  API: EGL v: 1.5 drivers: iris,swrast platforms: gbm,surfaceless,device
  API: OpenGL v: 4.6 compat-v: 4.5 vendor: mesa v: 25.0.7-0ubuntu0.24.04.2
    note: console (EGL sourced) renderer: Mesa Intel Graphics (ARL), llvmpipe (LLVM 20.1.2 256 bits)
Audio:
  Device-1: Intel driver: snd_hda_intel
  Device-2: Sunplus Innovation FHD Camera driver: snd-usb-audio,uvcvideo type: USB
  Device-3: C-Media Audio Adapter (Unitek Y-247A) driver: cmedia_hs100b,snd-usb-audio,usbhid type: USB
  API: ALSA v: k6.14.0-32-generic status: kernel-api
Network:
  Device-1: Intel driver: e1000e
  IF: enp0s31f6 state: up speed: 1000 Mbps duplex: full mac: 00:07:32:c8:9f:93
  Device-2: Intel Ethernet I226-IT driver: igc
  IF: enp2s0 state: down mac: 00:07:32:c8:9f:94
  IF-ID-1: docker0 state: down mac: c2:e3:61:3e:4b:a7
Drives:
  Local Storage: total: 238.47 GiB used: 16.27 GiB (6.8%)
  ID-1: /dev/nvme0n1 vendor: Kingston model: OM8PGP4256Q-A0 size: 238.47 GiB
Partition:
  ID-1: / size: 233.39 GiB used: 16.27 GiB (7.0%) fs: ext4 dev: /dev/nvme0n1p2
  ID-2: /boot/efi size: 1.05 GiB used: 6.1 MiB (0.6%) fs: vfat dev: /dev/nvme0n1p1
Swap:
  ID-1: swap-1 type: file size: 4 GiB used: 0 KiB (0.0%) file: /swap.img
Sensors:
  System Temperatures: cpu: 40.0 C mobo: N/A
  Fan Speeds (rpm): N/A
Info:
  Memory: total: 16 GiB note: est. available: 15.11 GiB used: 1.77 GiB (11.7%)
  Processes: 333 Uptime: 20m Init: systemd target: graphical (5) Shell: Sudo inxi: 3.3.34
Everything appears to be detected properly, including the Gigabit Ethernet port, the 2.5 Gbps Ethernet port, the Sunplus USB FHD camera (UP USB camera), and the Kingston OM8PGP4256Q-A0 SSD. One oddity is the maximum CPU frequency reported by the system: 6300/6100/2500 MHz for the P-cores, E-cores, and LPE-cores. The P-cores and E-cores are supposed to max out at 4.9 GHz and 4.3 GHz respectively, so the OPP tables are probably misconfigured, and benchmarks and utilities will report these wrong maximum frequencies throughout the review.
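For reference, the bogus limits can be seen directly in sysfs without any extra tools. The loop below is my own quick sketch, not something shipped with the board; it simply prints the maximum frequency the kernel exposes for each core:

# Print the maximum cpufreq frequency (in kHz) the kernel reports for each core.
# On this board, the P-cores and E-cores show 6,300,000 and 6,100,000 kHz instead
# of the expected 4,900,000 and 4,300,000 kHz.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  echo "$(basename "$cpu"): $(cat "$cpu/cpufreq/cpuinfo_max_freq") kHz"
done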
UP Xtreme ARL benchmarks
Let’s start the benchmarks with Thomas Kaiser’s sbc-bench.sh:
devkit@devkit-UP-TWL01:~$ sudo ./sbc-bench.sh -r
Starting to examine hardware/software for review purposes...

sbc-bench v0.9.72

Installing needed tools: apt-get -f -qq -y install powercap-utils links mmc-utils smartmontools stress-ng, p7zip 16.02, tinymembench, ramlat, mhz, cpufetch, cpuminer. Done.
Checking cpufreq OPP. Done.
Executing tinymembench. Done.
Executing RAM latency tester. Done.
Executing OpenSSL benchmark. Done.
Executing 7-zip benchmark. Done.
Throttling test: heating up the device, 5 more minutes to wait. Done.
Checking cpufreq OPP again. Done (17 minutes elapsed).

Results validation:

  * Advertised vs. measured max CPU clockspeed: -29.7% before, -29.7% after -> https://tinyurl.com/32w9rr94
  * No swapping
  * Background activity (%system) OK
  * Powercap detected. Details: "sudo powercap-info -p intel-rapl" -> https://tinyurl.com/4jh9nevj

# AAEON UPX-ARL01 V1.0 / Ultra 5 225H

Tested with sbc-bench v0.9.72 on Sat, 29 Nov 2025 08:19:51 +0100.

### General information:

The CPU features 3 clusters of different core types:

  Ultra 5 225H, Kernel: x86_64, Userland: amd64

  CPU sysfs topology (clusters, cpufreq members, clockspeeds)
                 cpufreq   min    max
  CPU    cluster policy   speed  speed   core type
   0        0      0       400   6300    Lion Cove
   1        0      1       400   6300    Lion Cove
   2        0      2       400   6300    Lion Cove
   3        0      3       400   6300    Lion Cove
   4        0      4       400   6100    Skymont
   5        0      5       400   6100    Skymont
   6        0      6       400   6100    Skymont
   7        0      7       400   6100    Skymont
   8        0      8       400   6100    Skymont
   9        0      9       400   6100    Skymont
  10        0     10       400   6100    Skymont
  11        0     11       400   6100    Skymont
  12        0     12       400   2500    Skymont
  13        0     13       400   2500    Skymont

15473 KB available RAM

### Policies (performance vs. idle consumption):

Status of performance related policies found below /sys:

/sys/module/pcie_aspm/parameters/policy: [default] performance powersave powersupersave

### Clockspeeds (idle vs. heated up):

Before at 47.0°C:

  cpu0-cpu3 (Lion Cove): OPP: 6300, Measured: 4885 (-22.5%)
  cpu4-cpu11 (Skymont): OPP: 6100, Measured: 4286 (-29.7%)
  cpu12-cpu13 (Skymont): OPP: 2500, Measured: 2461 (-1.6%)

After at 59.0°C:

  cpu0-cpu3 (Lion Cove): OPP: 6300, Measured: 4885 (-22.5%)
  cpu4-cpu11 (Skymont): OPP: 6100, Measured: 4286 (-29.7%)
  cpu12-cpu13 (Skymont): OPP: 2500, Measured: 2457 (-1.7%)

### Performance baseline

  * cpu0 (Lion Cove): memcpy: 22606.1 MB/s, memchr: 30673.7 MB/s, memset: 19278.7 MB/s
  * cpu4 (Skymont): memcpy: 12187.2 MB/s, memchr: 16975.8 MB/s, memset: 16040.6 MB/s
  * cpu12 (Skymont): memcpy: 8743.8 MB/s, memchr: 7533.8 MB/s, memset: 8474.1 MB/s
  * cpu0 (Lion Cove) 16M latency: 31.64 24.02 30.49 23.78 32.13 25.09 24.21 31.40
  * cpu4 (Skymont) 16M latency: 27.76 21.27 27.25 22.08 28.08 21.08 20.72 23.85
  * cpu12 (Skymont) 16M latency: 210.7 205.4 262.2 278.0 211.7 241.5 288.1 271.3
  * cpu0 (Lion Cove) 128M latency: 130.5 122.8 144.9 122.0 143.5 130.4 119.6 109.3
  * cpu4 (Skymont) 128M latency: 134.4 138.5 162.8 124.3 139.8 122.8 117.7 127.8
  * cpu12 (Skymont) 128M latency: 264.7 287.1 301.6 352.6 286.5 230.1 290.3 275.7
  * 7-zip MIPS (3 consecutive runs): 50422, 44687, 44986 (46700 avg), single-threaded: 5751
  * `aes-256-cbc  1498905.78k  1680152.96k  1722302.12k  1732885.85k  1736146.94k  1734825.30k (Lion Cove)`
  * `aes-256-cbc  1318746.73k  1490840.92k  1540928.26k  1554062.34k  1557968.21k  1558287.70k (Skymont)`
  * `aes-256-cbc   691187.47k   855949.06k   866031.10k   880585.73k   894574.59k   894582.78k (Skymont)`

### PCIe and storage devices:

  * Intel Ethernet I226-IT: Speed 5GT/s, Width x1, driver in use: igc, ASPM Disabled
  * 238.5GB "KINGSTON OM8PGP4256Q-A0" SSD as /dev/nvme0: Speed 16GT/s, Width x4, 0% worn out, 19 error log entries, drive temp: 52°C, ASPM Disabled

"nvme error-log /dev/nvme0 ; smartctl -x /dev/nvme0" could be used to get further information about the reported issues.

### Swap configuration:

  * /swap.img on /dev/nvme0n1p2: 4.0G (0K used)

### Software versions:

  * Ubuntu 24.04.3 LTS (noble)
  * Compiler: /usr/bin/gcc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 / x86_64-linux-gnu
  * OpenSSL 3.0.13, built on 30 Jan 2024 (Library: OpenSSL 3.0.13 30 Jan 2024)

### Kernel info:

  * `/proc/cmdline: BOOT_IMAGE=/boot/vmlinuz-6.14.0-32-generic root=UUID=eaca6cca-80e9-4aab-9a74-10fa0e135c4a ro quiet splash vt.handoff=7`
  * Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
  * Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  * Kernel 6.14.0-32-generic / CONFIG_HZ=1000

Waiting for the device to cool down...................................... 40.0°C
The 7-Zip benchmark’s first run achieved 50,422 MIPS, then dropped to 44,687 and 44,986 MIPS for the second and third runs once the power limits kicked in. We can better understand what happened by monitoring the CPU frequency and temperature: there’s a clear spike in CPU frequency up to 4.9 GHz, with the CPU temperature reaching 100°C, when a short burst of power is allowed.
But after the initial spike, and during longer multi-core tests, the CPU frequency stabilizes at around 3 GHz, and the temperature settles at a manageable 64-65°C.
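For those who want to reproduce this, a simple way to watch clocks and temperatures while a benchmark runs in another terminal is the quick sketch below (my own, sbc-bench also records this information on its own):

# Print per-core clocks and CPU temperatures once per second
# (sensors comes from the lm-sensors package).
sudo apt install lm-sensors
watch -n 1 "grep 'cpu MHz' /proc/cpuinfo; sensors | grep -E 'Package|Core'"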
Let’s check the power limits:
devkit@devkit-UP-TWL01:~$ sudo powercap-info -p intel-rapl
[sudo] password for devkit:
enabled: 1
Zone 0
  name: package-0
  enabled: 1
  max_energy_range_uj: 262143328850
  energy_uj: 47238526468
  Constraint 0
    name: long_term
    power_limit_uw: 28000000
    time_window_us: 27983872
    max_power_uw: 28000000
  Constraint 1
    name: short_term
    power_limit_uw: 60000000
    time_window_us: 2440
    max_power_uw: 0
  Constraint 2
    name: peak_power
    power_limit_uw: 120000000
    max_power_uw: 0
  Zone 0:0
    name: core
    enabled: 0
    max_energy_range_uj: 262143328850
    energy_uj: 17543048351
    Constraint 0
      name: long_term
      power_limit_uw: 0
      time_window_us: 976
  Zone 0:1
    name: uncore
    enabled: 0
    max_energy_range_uj: 262143328850
    energy_uj: 8287216638
    Constraint 0
      name: long_term
      power_limit_uw: 0
      time_window_us: 976
Zone 1
  name: psys
  enabled: 0
  max_energy_range_uj: 262143328850
  energy_uj: 858917295
  Constraint 0
    name: long_term
    power_limit_uw: 0
    time_window_us: 27983872
  Constraint 1
    name: short_term
    power_limit_uw: 0
    time_window_us: 976
The PL1, PL2, and PL4 power limits are set to 28W, 60W, and 120W respectively, while Intel lists the processor base power (PBP) at 28W and the maximum turbo power (MTP) at 115W.
Contrary to what we experienced with the Intel N150 SBCs, there aren’t any options to change the power limits on the Core Ultra 5 225H SBC. For those who need to adjust the power limits up or down, it should still be possible to do so from userspace.
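For example, the RAPL limits can be changed at runtime through sysfs or with powercap-set from the powercap-utils package that sbc-bench already installed. The 35W/65W values below are purely illustrative, the changes are lost at reboot, and they should only be made if the cooling can cope:

# Raise PL1 (long_term) to 35W and PL2 (short_term) to 65W; values are in microwatts.
echo 35000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
echo 65000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_1_power_limit_uw
# Equivalent for PL1 using powercap-utils:
sudo powercap-set -p intel-rapl -z 0 -c 0 -l 35000000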
We can test the CPU performance by running Geekbench 6.5.0 single-core and multi-core benchmarks.
The Intel Core Ultra 5 225H processor achieved 2,837 points in the single-core test and 13,205 points in the multi-core test.
While we’ll test AI workloads in more detail further below, I also tried to run Geekbench AI to evaluate performance for artificial intelligence tasks. But the benchmark defaults to the TensorFlow Lite framework on the CPU, so it’s not representative of the actual AI performance of the Intel SoC.
In that configuration, Geekbench AI reports 2,086 points for the single-precision score, 2,073 points for the half-precision score, and 1,534 points for the quantized score.
Let’s see if we can use the GPU or NPU (Intel AI Boost) too:
devkit@devkit-UP-TWL01:~/GeekbenchAI-1.5.0-Linux$ ./banff --ai-list
Geekbench AI 1.5.0 : https://www.geekbench.com/ai/

Geekbench AI requires an active internet connection and automatically uploads
benchmark results to the Geekbench Browser.

  Framework         | Backend | Device
  1 TensorFlow Lite | 1 CPU   | 0 Intel(R) Core(TM) Ultra 5 225H
  3 ONNX            | 1 CPU   | 0 Intel(R) Core(TM) Ultra 5 225H
  4 OpenVINO        | 1 CPU   | 0 Intel(R) Core(TM) Ultra 5 225H
  4 OpenVINO        | 2 GPU   | 1 Intel(R) Arc(TM) Graphics (iGPU)
The NPU is not listed, but we can run the benchmark again using the OpenVINO framework on the Intel Arc 130T Graphics iGPU:
devkit@devkit-UP-TWL01:~/GeekbenchAI-1.5.0-Linux$ ./banff --ai-framework OpenVino --ai-backend GPU
Geekbench AI 1.5.0 : https://www.geekbench.com/ai/

Geekbench AI requires an active internet connection and automatically uploads
benchmark results to the Geekbench Browser.

AI Information
  Framework           OpenVINO
  Backend             GPU
  Device              Intel(R) Arc(TM) Graphics (iGPU)

System Information
  Operating System    Ubuntu 24.04.3 LTS
  Model               AAEON UPX-ARL01
This makes a huge difference, and now GeekBench AI reports 8,585 points for the single-precision score, 17,122 points for the half-precision score, and 21,702 points for the quantized score.
Let’s now test GPU performance with the Unigine Heaven Benchmark 4.0. The Intel Core Ultra 5 225H SBC achieved a score of 1,608 points while rendering the scene at an average of 63.8 FPS at the standard 1920×1080 resolution using the built-in Intel Arc 130T graphics.
I also tested several 4K or 8K YouTube videos in Firefox and Chrome.
For the first test, I searched for an 8K 60 FPS video on YouTube and played it in Firefox. The maximum selectable resolution was 2160p60, so no 8K option for this VP9 video. It was somewhat watchable, but I still had 827 dropped frames out of 11,193 (7.4% loss).
Since I could not select 8K, I decided to try it in Chrome, but I had the same limitation. Playback felt noticeably smoother though, and the “Stats for Nerds” overlay confirmed it with 106 frames dropped out of 9,743 (1.1% loss).
I suspected the VP9 codec may have been the culprit here, so I browsed YouTube to find an 8K AV1 video (it took a few tries), and this time I could select 4320p60 in Firefox. It reported 1,234 frames dropped out of 22,541, or a 5.4% loss, better than the VP9 video at 4K 60 FPS despite the higher resolution.
Switching to Chrome worked too, and I had even fewer dropped frames (222 out of 12,426), for a loss of about 1.8%.
Finally, I also used that video to test 4Kp60 AV1 playback in Chrome over a longer 40+ minute run: 517 frames dropped out of 163,523, or a 0.3% loss. So I’d recommend AV1 video whenever possible, and Chrome worked better than Firefox with the videos I tried.
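If you want to check what the Arrow Lake iGPU can decode in hardware before blaming a codec, vainfo lists the VA-API profiles. Note that browsers on Linux don’t always use hardware decoding even when it is available, so this only shows capability, not what Firefox or Chrome actually did during my tests:

# List hardware decode/encode profiles exposed through VA-API and filter for AV1/VP9.
sudo apt install vainfo
vainfo | grep -iE 'av1|vp9'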
I ran Speedometer 2.0 in Firefox to test the web browser performance.
375 runs per minute must be the best score I’ve ever had. Speedometer 2.0 is deprecated, but I still use it for comparison with older results. Let’s also run the more recent Speedometer 3.0.
That would be 23.9 points, also a pretty high web browsing score, with the caveat that web browser scores tend to increase over time as the software gets optimized for speed.
UP Xtreme ARL benchmarks compared to Intel/AMD mini PCs
Since the UP Xtreme ARL is the first Intel Core Ultra (Series 2) “Arrow Lake” platform I’ve reviewed, I’ll also compare the Ubuntu benchmark results against fairly powerful Intel and AMD mini PCs I tested in Ubuntu: the GEEKOM GT1 Mega (Intel Core Ultra 9 185H), the GEEKOM A8 (AMD Ryzen 9 8945HS), and the Khadas Mind 2 AI Maker Kit (Intel Core Ultra 7 258V).
Here’s a quick summary of the specifications of all four platforms:
|           | UP Xtreme ARL | GEEKOM GT1 Mega | GEEKOM A8 | Khadas Mind 2 AI Maker Kit |
| SoC       | Intel Core Ultra 5 225H | Intel Core Ultra 9 185H | AMD Ryzen 9 8945HS | Intel Core Ultra 7 258V |
| CPU       | 14-core (4P+8E+2LPE) Arrow Lake processor up to 4.9 GHz (P-cores), up to 4.3 GHz (E-cores), up to 2.5 GHz (LPE-cores) | 16-core/22-thread (6P+8E+2LP) Meteor Lake processor up to 5.1 GHz (P-cores), up to 3.8 GHz (E-cores), up to 2.5 GHz (LP-cores) | 8-core/16-thread processor up to 5.2 GHz | 8-core (4P+4E) Lunar Lake processor up to 4.8 GHz (P-cores) and 3.7 GHz (E-cores) |
| GPU       | 7 Xe cores Intel Arc 130T GPU (63 TOPS) | 8 Xe cores Intel Arc Graphics | AMD Radeon 780M Graphics | 8 Xe cores Intel Arc 140V GPU |
| NPU       | Intel AI Boost (13 TOPS) | Intel AI Boost (34 TOPS) | Ryzen AI (16 TOPS) | Intel AI Boost (47 TOPS) |
| Memory    | 16GB LPDDR5-6400 | 32GB DDR5-5600 | 32GB DDR5-5600 | 32GB LPDDR5-8533 |
| Storage   | 256GB NVMe SSD | 2TB NVMe SSD | 2TB NVMe SSD | 1TB NVMe SSD |
| Tested OS | Ubuntu 24.04.3 | Ubuntu 24.10 | Ubuntu 24.04 | Ubuntu 24.10 |
Benchmark results comparison table:
|                               | UP Xtreme ARL | GEEKOM GT1 Mega | GEEKOM A8 | Khadas Mind 2 AI Maker Kit |
| sbc-bench.sh – memcpy         | 22,606.1 MB/s (P-core) | 21,364.6 MB/s (P-core) | 20,318.5 MB/s | 25,504.6 MB/s (P-core) |
| sbc-bench.sh – memset         | 19,278.7 MB/s (P-core) | 36,928.3 MB/s (P-core) | 62,156.7 MB/s | 65,157.0 MB/s (P-core) |
| sbc-bench.sh – 7-zip (average) | 46,700 | 67,960 | 68,790 | 19,980 / 31,480 (adjusted PL) |
| sbc-bench.sh – 7-zip (top result) | 50,422 | 71,623 | 69,297 | 22,093 / 32,768 (adjusted PL) |
| sbc-bench.sh – OpenSSL AES-256 16K | 1,734,825.30k (P-core) | 1,698,239.83k (P-core) | 1,422,136.66k | 1,665,362.60k (P-core) |
| Geekbench 6 Single            | 2,837 | 2,605 | 2,661 | 2,829 |
| Geekbench 6 Multi             | 13,205 | 13,728 | 13,275 | 7,014 / 9,618 (adjusted PL) |
| Unigine Heaven score          | 1,608 | 1,956 | 1,972 | 1,057 / 1,698 (adjusted PL) |
| Speedometer 2.0 (Firefox)     | 375 | 278 | 298 | 295 |
| Speedometer 3.0 (Firefox)     | 23.9 | 19.5 | 19.1 | 18.4 |
First, there’s an outlier in the memset results, as the bandwidth for the UP Xtreme ARL SBC looks rather low, while its memcpy result is in the same range as the three other systems. The Intel Core Ultra 5 225H shines in single-core benchmarks (OpenSSL, Geekbench 6 single-core, and Speedometer), but struggles more with multi-core workloads. That’s easily explained since it only has 14 cores, while the Intel Core Ultra 9 185H and AMD Ryzen 9 8945HS are respectively 16-core and 16-thread processors with a higher PL1 value (45W vs 28W). In that regard, it’s much faster than the 30W 8-core Intel Core Ultra 7 258V found in the Khadas Mind 2 AI Maker Kit. As a mid-range Core Ultra 5 part, its integrated GPU is also slightly weaker than those of the three other systems in the Unigine Heaven 3D graphics benchmark.
It would have been nice to compare the AI performance of all four systems since they all come with AI accelerators, but AI Linux benchmark tools were not quite ready when I tested the three older systems.
Features testing
I’ve also checked the key hardware features of the UP Xtreme ARL SBC:
- HDMI 1 – Video OK, Audio OK
- HDMI 2 – Video OK, Audio OK
- DisplayPort – Video OK, No audio
- Storage – NVMe SSD – OK: 3.45 GB/s sequential read speed, 1.78 GB/s sequential write speed
devkit@devkit-UP-TWL01:~$ iozone -e -I -a -s 1000M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
                                                            random    random     bkwd    record    stride
              kB  reclen    write  rewrite     read    reread    read     write     read   rewrite     read    fwrite  frewrite    fread   freread
         1024000       4   412528   487824   241723   241471    79266    458082
         1024000      16   968656  1132606   683398   685350   206835   1122038
         1024000     512  1787241  1794192  1795590  1809276  1552343   1771997
         1024000    1024  1787350  1786844  2327569  2348478  2288012   1781049
         1024000   16384  1785558  1786466  3455738  3501434  3458844   1789200
- Ethernet (iperf3 commands shown below)
  - LAN1 – Gigabit Ethernet (top) – OK (iperf3 DL: 937 Mbps, UL: 930 Mbps, full-duplex: 929/925 Mbps)
  - LAN2 – 2.5 Gbps Ethernet – OK (iperf3 DL: 2.35 Gbps, UL: 2.34 Gbps, full-duplex: 2.35/2.33 Gbps)
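For reference, the iperf3 numbers above were obtained against another machine on the local network running iperf3 in server mode; a minimal set of commands (192.168.1.100 is a placeholder for the server’s IP address, not the one I actually used) would be:

# On the server machine: iperf3 -s
# Upload, download (reverse), and bidirectional tests from the UP Xtreme ARL:
iperf3 -c 192.168.1.100
iperf3 -c 192.168.1.100 -R
iperf3 -c 192.168.1.100 --bidir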
- USB ports tested with an ORICO NVMe SSD enclosure (EXT-4 partition), a USB HDD (for the USB 2.0 port), an RF dongle for a wireless keyboard/mouse combo, and the USB camera part of the kit
  - USB 3.0 combo jack
    - Top – 10 Gbps; tested up to 992 MB/s with iozone3
    - Bottom – 10 Gbps; tested up to 1,001 MB/s with iozone3
  - USB 2.0 on HDMI combo jack – 480 Mbps; tested up to about 42 MB/s with iozone3
- RTC – OK
devkit@devkit-UP-TWL01:~$ sudo apt install util-linux-extra
devkit@devkit-UP-TWL01:~$ timedatectl
               Local time: Sun 2025-11-30 11:04:23 CET
           Universal time: Sun 2025-11-30 10:04:23 UTC
                 RTC time: Sun 2025-11-30 10:04:23
                Time zone: Europe/Amsterdam (CET, +0100)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
devkit@devkit-UP-TWL01:~$ sudo hwclock -r
2025-11-30 11:04:43.518430+01:00
- GPIO – OK – Also check the 40-pin GPIO header layout
devkit@devkit-UP-TWL01:~$ ls /dev/gpiochip*
/dev/gpiochip0  /dev/gpiochip1
devkit@devkit-UP-TWL01:~$ sudo apt install libgpiod-dev gpiod
devkit@devkit-UP-TWL01:~$ sudo gpioinfo 0
gpiochip0 - 451 lines:
        line   0:      unnamed       unused   input  active-high
        line   1:      unnamed       unused   input  active-high
        line   2:      unnamed       unused   input  active-high
        line   3:      unnamed       unused   input  active-high
        line   4:      unnamed       unused   input  active-high
        line   5:      unnamed       unused   input  active-high
        line   6:      unnamed       unused   input  active-high
        line   7:      unnamed       unused   input  active-high
...
devkit@devkit-UP-TWL01:~$ sudo gpioinfo 1
gpiochip1 - 28 lines:
        line   0:      unnamed       unused   input  active-high
        line   1:      unnamed       unused   input  active-high
        line   2:      unnamed       unused   input  active-high
        line   3:      unnamed       unused   input  active-high
        line   4:      unnamed       unused   input  active-high
        line   5:      unnamed       unused   input  active-high
        line   6:      unnamed       unused   input  active-high
        line   7:      unnamed       unused   input  active-high
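The gpiod command-line tools can then be used to poke individual pins on the 40-pin header. The sketch below is purely illustrative (line 5 of gpiochip1 is an arbitrary example, so check the UP pinout documentation before driving anything), and uses the libgpiod 1.6.x tool syntax shipped with Ubuntu 24.04:

# Read the current value of line 5 on gpiochip1, then drive it high for one second.
sudo gpioget gpiochip1 5
sudo gpioset --mode=time --sec=1 gpiochip1 5=1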
Here’s a photo of a triple display setup with the DP output connected to a KTC A32Q8 32-inch 4K monitor, and the HDMI outputs to a 10.1-inch Raspberry Pi All-in-One display (left) and a CrowView 14-inch portable monitor (right).
I have five sound outputs detected, including three “HDMI / DisplayPort – Built-in Audio” devices, but I only managed to get audio through HDMI, and not the DisplayPort.
That means everything worked as expected for me, except for the DisplayPort audio output: the output was detected, but I didn’t get any sound for whatever reason. Note that DisplayPort audio worked fine on the UP Squared Pro TWL with the same cable and monitor.
AI testing on the UP Xtreme ARL SBC
It’s now time to run the same AI workloads as I did on the UP TWL and UP Squared Pro TWL AI Dev Kits, namely Network Optix Nx Meta and the AAEON UP AI toolkit.
Network Optix Nx Meta
Let’s install the Nx AI Certification Test:
sudo apt dist-upgrade
sudo apt install python3-pip python3-venv
mkdir nxai_test
cd nxai_test
wget https://artifactory.nxvms.dev/artifactory/nxai_open/NXAITest/nxai_test.tgz
tar -xvf nxai_test.tgz
python3 -m venv ./
source ./bin/activate # activate python venv
pip3 install -r requirements.txt
./Utilities/install_nxai_manager.sh
python3 Utilities/install_acceleration_library.py
python3 Utilities/download_models.py
It’s the same process as on the other boards, except I selected OpenVino this time:
(nxai_test) devkit@devkit-UP-TWL01:~/nxai_test$ python3 Utilities/install_acceleration_library.py
Detecting compatible acceleration hardware...
ls: cannot access '/dev/memx*': No such file or directory
ls: cannot access '/dev/dxrt*': No such file or directory
System detected more than one compatible acceleration runtime for your device. Please choose one to install:
1 : Nx CPU
2 : OpenVino
Enter number 1 - 2:2
Once the installation is complete, we can run the script to test everything:
python3 all_suites.py
All 46 benchmarks completed, as they did on the UP TWL running on the CPU/GPU:
##################################################
All model benchmarks completed.
Benchmark results:
  Model-Yolov8s-[640x640]: 15.23 FPS
  Model-ViT-Tiny: 239.85 FPS
  Model-Yolov4-[1280x1280]: 6.60 FPS
  Model-Yolov9-e-[640x640]: 4.03 FPS
  Model-Emotion-Recognizer: 1397.87 FPS
  Model-Yolov9-c-[640x640]: 2.52 FPS
  Postprocessor-Illegal-Dumping: 25.35 FPS
  Model-Yolov7-Tiny-[1280x1280]: 6.93 FPS
  Pipeline-Feature-Extraction: 508.23 FPS
  Model-Face-Locator: 437.94 FPS
  80-classes-object-detector[640x640]: 26.72 FPS
  Model-Yolov4-[320x320]: 98.54 FPS
  Model-Resnet-50: 54.58 FPS
  80-classes-object-detector[320x320]: 96.56 FPS
  Model-Yolo5su-[640x640]: 16.29 FPS
  Quantized-INT8: 52.06 FPS
  Model-Resnet-18: 115.20 FPS
  Model-Regnet-Y: 415.34 FPS
  Model-Yolov7x-[1280x1280]: 0.63 FPS
  Model-Yolov8l-[640x640]: 2.77 FPS
  Model-Yolo5su-[1280x1280]: 4.12 FPS
  Model-Yolov4-[128x128]: 436.94 FPS
  Model-Yolov7-Tiny-[640x640]: 28.09 FPS
  Quantized-FP32: 58.32 FPS
  Model-Yolo5su-[256x256]: 87.07 FPS
  Model-Densenet: 65.93 FPS
  Pipeline-Direct: 87.37 FPS
  Model-Yolov9-m-converted-[640x640]: 5.53 FPS
  Model-Clip: 38.54 FPS
  Model-Yolov4-[640x640]: 26.15 FPS
  Multi-Model: 184.84 FPS
  Quantized-FP16: 58.69 FPS
  Model-Mobilenet-V3: 320.15 FPS
  Empty-Small: 1958.36 FPS
  Model-Yolov7-[640x640]: 4.05 FPS
  Pipeline-Conditional: 558.34 FPS
  Model-Yolov9-[640x640]: 4.11 FPS
  Model-Yolov9-converted-[640x640]: 4.08 FPS
  Model-Yolov7x-[640x640]: 2.39 FPS
  Model-Yolov9-m-[640x640]: 4.69 FPS
  Empty-Large: 218.12 FPS
  Model-PPE: 64.63 FPS
  postprocessor-python-example: 94.69 FPS
  postprocessor-c-example: 102.14 FPS
  postprocessor-python-image-example: 97.14 FPS
  postprocessor-c-image-example: 99.24 FPS
##################################################
The stability tests also all passed successfully:
------------------------------------------------------
Tests passed: 6 / 6
------------------------------------------------------
You can check the full log for reference.
Let’s compare the UP Xtreme ARL results with a subset of the results we got on the UP TWL (Intel N150) and UP Squared Pro TWL (Intel N150 + Hailo-8L).
|                                      | UP TWL (Nx CPU) | UP TWL (OpenVino) | UP Squared Pro TWL (Hailo-8L) | UP Xtreme ARL (OpenVino) |
| 80-classes-object-detector[640x640]  | 3.91 FPS | 3.73 FPS | 38.04 FPS | 26.72 FPS |
| 80-classes-object-detector[320x320]  | 15.24 FPS | 14.73 FPS | 90.31 FPS | 96.56 FPS |
| postprocessor-python-example         | 14.98 FPS | 14.73 FPS | 88.40 FPS | 94.69 FPS |
| postprocessor-python-image-example   | 15.73 FPS | 15.17 FPS | 90.55 FPS | 97.14 FPS |
| postprocessor-c-image-example        | 14.77 FPS | 14.75 FPS | 83.18 FPS | 99.24 FPS |
| postprocessor-c-example              | 14.75 FPS | 14.76 FPS | 89.15 FPS | 102.14 FPS |
| Model-Yolov9-e-[640x640]             | 0.60 FPS | 0.59 FPS | Failed | 4.03 FPS |
| Model-Yolov9-e-converted-[640x640]   | 0.32 FPS | 0.31 FPS | Failed | Failed |
| Model-Yolov4-[320x320]               | 15.25 FPS | 14.68 FPS | Failed | 98.54 FPS |
| Model-Mobilenet-V3                   | 48.45 FPS | 56.08 FPS | Failed | 320.15 FPS |
| Total benchmarks                     | 46 | 46 | 6 | 46 |
| Stability tests                      | 6/6 | 6/6 | 3/3 | 6/6 |
As expected, the UP Xtreme ARL delivers the highest performance in almost all tests, the exception being 80-classes-object-detector[640x640], where the Hailo-8L reached 38.04 FPS against 26.72 FPS on the UP Xtreme ARL. I was surprised that one test in the list above failed on the UP Xtreme ARL (OpenVino) but not on the UP TWL (OpenVino):
---------------------------------------------------
Running test: Model-Yolov9-e-converted-[640x640]
Loading test settings...
Creating Unix socket server...
Starting Edge AI Manager
Error! Failed to communicate with inference engine 0
ERROR: Runtime failed to load model /home/devkit/nxai_test/Utilities/../Models/c2588352-3395-46e3-bc9d-8fa920145c82.onnx.
Error! AI Manager exited prematurely! Code: 1
---------------------------------------------------
It looks like every platform running the OpenVino or Nx CPU runtimes fails a few benchmarks with a similar error, but still ends up with 46 completed tests. So overall, the UP Xtreme ARL delivers the best performance and offers the best compatibility of the three platforms, but it’s obviously more expensive. The UP Squared Pro TWL with Hailo-8L offers the best price/performance ratio, but requires a bit more work, as Network Optix hasn’t implemented most tests for the Hailo-8L accelerator. The UP TWL AI Dev Kit does not come with any AI accelerator at all, and everything runs on the CPU/GPU, so it will only be suitable for lightweight AI workloads or when a high inference rate is not needed.
AAEON UP AI toolkit demos
We are not done yet, as I still have to run the UP AI toolkit examples available on GitHub.
I followed the same steps to install and launch the AAEON UP AI toolkit as I did for the other boards:
git clone https://github.com/up-division/up-ai/
cd up-ai
chmod +x prepare.sh start_app.sh
./prepare.sh
sudo reboot
cd ~/up-ai
./start_app.sh
As in the previous reviews, I got an HTTP error once while running the prepare.sh script, but there was no issue with storage capacity this time, since the 256GB SSD offers plenty of space. The script then launches the UP Edge AI Sizing Tool in Firefox.
The Intel Arc Graphics (GPU) is detected, but I don’t see any NPU entry for the Intel AI Boost. Nevertheless, I could add Computer Vision demos (Object Detection) using the USB camera on either the CPU or the GPU.
I got 13.73 FPS inference on the CPU…
… and 29.99 FPS on the GPU, which did the job fine.
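Since the NPU didn’t show up in the UP AI toolkit either, a quick way to double-check which devices OpenVINO itself can see is the one-liner below (run from an environment with the openvino Python package installed). “NPU” will only be listed if the Intel NPU driver is present and working, which is an assumption I couldn’t verify here:

# List the inference devices OpenVINO detects (CPU, GPU, and NPU if its driver is loaded).
python3 -c "import openvino as ov; core = ov.Core(); [print(d, '-', core.get_property(d, 'FULL_DEVICE_NAME')) for d in core.available_devices]"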
Since the UP Xtreme ARL is a more powerful platform and ships with a 256 GB NVMe SSD, it’s easier to install more models without quickly running out of space, as I experienced with the 64GB eMMC flash found on the UP TWL and UP Squared Pro TWL (unless an SSD is installed on the latter).
Next up was a text generation model (Mistral-7B-Instruct) running on the GPU. I asked it about OpenVino, and it replied in 12.91s at 8.03 tokens/s.
I also added an “automatic speech recognition demo” running on the GPU. There’s a default audio file saying “How are you doing today?” and the transcription was completed in 0.40 seconds.
I decided to load another audio file – a short ~30s excerpt from a financial interview – and the transcription was done accurately in 1.70 seconds.
There’s also a Translate option. I found a 32-minute 54-second interview about philosophy in French, and asked the ASR demo to transcribe and translate it into English.
It did so in 86.50 seconds. Not all sentences make perfect sense, but for the most part, the result is close to the original meaning. As somebody who has done some video/audio transcription and translation work in the past, the speed is scary. Transcribing one hour of audio typically takes a human 3 to 5 hours, and translators can usually handle around 3,000 words per day, so a roughly 33-minute clip with an estimated 4,000 words would represent 10 to 12 hours of work. That makes the machine 400 to 500 times faster than a human. Editing the AI-generated transcription and translation would likely take longer than editing human work, but editing is needed in either case.
Since I ran into the 64GB eMMC flash’s limited capacity last time around, I checked the space used on the SSD: 90GB, which includes the OS itself, benchmarking tools, AI tools, and models.
devkit@devkit-UP-TWL01:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.6G  2.4M  1.6G   1% /run
/dev/nvme0n1p2  234G   90G  133G  41% /
tmpfs           7.6G     0  7.6G   0% /dev/shm
tmpfs           5.0M   16K  5.0M   1% /run/lock
efivarfs        192K  126K   62K  67% /sys/firmware/efi/efivars
/dev/nvme0n1p1  1.1G  6.2M  1.1G   1% /boot/efi
tmpfs           1.6G  112K  1.6G   1% /run/user/1000
Power Consumption
I also measured the power consumption of the Arrow Lake AI development kit using a wall power meter:
- Power off – 2.7 – 2.8 Watts
- Idle – 15.4 – 16.8 Watts (fan active at all times)
- Stress test (stress -c 14)
  - First ~30 seconds – 62.5 – 64.0 Watts
  - Longer runs – 41.7 – 44.5 Watts
- Object detection – Camera + GPU – 38.4 – 40.1 Watts
The board was connected to an HDMI monitor, 2.5GbE, a USB RF dongle for a wireless keyboard/mouse combo, and a USB camera. Idle power consumption looks to be on the high side: even after disconnecting all peripherals, leaving only the power cable, it only drops to 13.2 – 14.4 Watts.
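As a cross-check of the wall measurements, the CPU package power can also be estimated from the RAPL energy counters. The quick sketch below averages package power over 10 seconds; it only covers the SoC package, so it will read lower than the wall power meter figures above:

# Sample the package energy counter twice, 10 seconds apart, and print the average power in Watts.
E1=$(sudo cat /sys/class/powercap/intel-rapl:0/energy_uj); sleep 10
E2=$(sudo cat /sys/class/powercap/intel-rapl:0/energy_uj)
awk -v e1="$E1" -v e2="$E2" 'BEGIN { printf "%.2f W\n", (e2 - e1) / 10 / 1e6 }'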
Conclusion
The AAEON UP Xtreme ARL AI Dev Kit features a powerful Arrow Lake SBC powered by an Intel Core Ultra 5 225H SoC, whose Intel Arc 130T graphics and 13 TOPS Intel AI Boost NPU deliver a combined 83 TOPS of AI performance when also taking the CPU into account.
Most standard features I tested worked fine, and it’s a powerful and versatile AI platform with most AI tests running as expected. The only real issues I encountered are that DisplayPort audio did not work with my monitor (HDMI audio was fine), and that the Intel AI Boost NPU does not appear to be used by the tools, or at least not explicitly.
Now that I have tested all three platforms from the UP AI Dev Kit ecosystem for close to 50 hours, I can leave final comments on all three:
- UP TWL AI Dev Kit – It’s a low-power, entry-level Intel N150 platform with no AI accelerator. Most tests will work, but the CPU-only processing will limit practical applications. I found the 64GB eMMC flash to fill up quickly. Price: $279 with power supply and camera.
- UP Squared Pro TWL AI Dev Kit – Also based on the Intel N150, but it ships with a 13 TOPS Hailo-8L M.2 AI accelerator that proved almost as capable as the UP Xtreme ARL kit in the tests that ran. However, fewer tests have been implemented in Nx Meta, and I found the SDK to be a pain to work with, having experienced frequent issues with version mismatches between Python, the Hailo drivers, and the Hailo libraries. It suffers from the same 64GB eMMC flash limitation as the UP TWL AI Dev Kit, but an M.2 socket allows the user to easily add NVMe storage if needed, although the heatsink must be removed from the Hailo-8L module for installation. It offers a good price/performance ratio, but more work may be needed than with the Intel-only kits. Price: $469.00 with power supply, USB camera, and Hailo-8L AI accelerator.
- UP Xtreme ARL AI Dev Kit – It’s clearly the most powerful platform of the three thanks to the Intel Core Ultra 5 225H SoC. Everything worked out of the box without headaches (unlike what I experienced with the UP Squared Pro variant), and it can run more AI workloads. It’s the devkit of choice if price is not an issue. Price: $899.00 with power supply and USB camera.
I’d like to thank AAEON for sending the three Intel AI Dev Kits for review. You can find all three for purchase on the UP Shop.