Hello Forum,
I have spent some time porting the Hello Chirp application example to run on an embedded Linux board. My approach was to write an interface layer for the functions that would typically write to the bare-metal registers. It is all running well, insofar as I can bring up the sensor (write the idle and short-range firmware), configure the sensors, and then enter the infinite loop. I am using the hardware-triggered mode, where I raise the INT line for a few tens of us and then read the ToF register. I am using as much of the Hello Chirp/SonicLib code as possible and have only touched the lowest I/O layers. I have taken extensive I2C traces using a Tektronix DPO 3014 I2C protocol analyzer and a Saleae Pro 16, so I am quite confident about the communications to the CH-101.
I guess my question is, has anyone else run the CH-101 in hardware-triggered mode and did they hit any problems?
Thank you for any help that you can offer.
Hello,
In my experience, this forum doesn't seem to get too much support.
In your post you don't seem to mention any specific issues. Is there a particular problem you are having? Does your embedded Linux board have an Atmel SAM processor?
I have ported this program to a board with the Nordic nRF52840 processor; however, I'm using timed interrupts rather than external hardware ones. Perhaps I can still help, though.
Hi Rachel,
This is a quiet forum.
My approach to porting the reference driver (Hello Chirp) has been to take the application and library code, remove the lower-level functions that accessed the registers, and write my own equivalents, so that I achieve the same pin-level behaviour as the original bare-metal driver.
Three parts are really required:
1) Control of GPIO, both as inputs and outputs;
2) An abstraction to the I2C bus interface;
3) Accurate timing for the RTC calibration.
Once I had resolved the missing symbols at link time, I could bring up the real hardware. I did this in two phases:
first, running the application on a Linux desktop (but with stubbed access to the H/W, just printfs), as that is a much faster and more efficient way of working; then testing on an embedded Linux target with access to the H/W.
I monitored the I2C traffic at each stage of the CH-101's initialisation: the data going to the device and the response where appropriate.
I could see the I2C programming address (0x45) being switched to the application address (0x2D) at the appropriate point, and the CH-101 responded,
so I believe that the loaded firmware was running. I have built the image to use the short-range firmware.
Once the CH-101 has been loaded and configured for a H/W triggered TX, I assert the INT line to trigger a range measurement.
I then switch the INT line (from the perspective of the host) to an input and wait for the CH-101 to assert it, which indicates that the range sampling is complete. Unfortunately, as soon as I change the INT line from an output to an input, it is immediately asserted. Given the speed of sound, I would expect a delay of ~6 to 8 ms; instead it asserts after 116 us, which is simply the time the OS takes to switch that I/O line from an output to an input.
This is clearly wrong, but I am struggling to get any useful support from Chirp/TDK/InvenSense.
On a typical peripheral that has raised an interrupt, there is either a specific register that can be written to clear the interrupt, or
reading a register (e.g. the ToF register) may clear it.
For some reason, Chirp is not willing to share the details needed to solve this kind of problem. I can understand the need to protect their IP, but it makes it very difficult to use the part outside of the reference design.
Thank you for any help you can provide.
Hi Scott,
Have you tried inserting a delay between driving the INT line high as an output and switching it back to an input? The original Chirp code has a 5-microsecond delay where it keeps the interrupt output high before switching to an input. The minimum needs to be 800 us.
Regards
Rachel