API vs Browser UI Theory


  • I am starting development using the API. My approach is to perform an action using the Browser UI and then see if I can replicate the result using the API.  As I peel back the onion,  I am finding questions that are not directly answered in the documentation.  

    The first question is in regard to Camera Settings.

    In the Browser UI,  there are 2 sliders for the Exposure and Gain

    Exposure range is 0 to 1000,  Gain range is 0 to 1000

    Using the API,  there are settings for the AnalogGain, DigitalGain and Exposure.  The documentation implies the following ranges:

    AnalogGain 256-1024,   DigitalGain 256-65535, Exposure 9000-90000

    If I use the Browser UI with settings of both the Gain and Exposure being 500,  what would the equivalent settings be using the API for the AnalogGain, DigitalGain and Exposure?

    Thank you,

    Rob



  • Hi Rob,
    They are proportionally mapped. The API exposes raw values that come from the hardware driver on the camera itself, which we decided wasn't a friendly scale, so we mapped them to more reasonable values.

    For exposure 
    exposureAPI = exposureSlider * 100
    For gain
    analogGainAPI = Math.round(analogGainSlider * 0.768) + 256;

    If I may offer another point here: the exposure values in the API are not quantized like they are in the front end, where they are stepped in values of 90 (90, 180, 270 ... 990). This is because there is harmonic interference between the frequency of the cameras and the frequency of the projector that cancels itself out at those values. If you play around with other values, you will see banding occur.  
    Steps of 90 offer (in most cases) enough control over the exposure. If you need a value between quantized steps, then I suggest you use the lower exposure and adjust with the gain setting.

    Hope this helps


  • Thanks Drew!  Very helpful.

    One followup.

    Should digitalGain be a constant default value if using analogGain or should it mirror the analogGain settings?


  • @Rob M, they are separate, and you can leave Digital Gain at its minimum value. Analog Gain will give better results than Digital Gain if more gain is needed.


  • Happy New Year.

    I have been slowly moving along with the API and I have a question about Descriptors/Settings/Camera

    I have implemented SetCameras and I have been able to independently set AnalogGain, DigitalGain and Exposure for each camera.

    I am using Tasks/ListSettings at the start of the program to fetch the scanner state and initialize my UI.  However Descriptors/Settings/Camera does not pass back the independent camera settings for AnalogGain, DigitalGain and Exposure but instead passes back 1 value only.

    Is this by design or is the implementation of Descriptors/Settings/Camera incomplete?

    Thanks!

    Rob


  • Hey Rob,
    This is a bug... sorta. Good catch. The scanner performs better if the cameras are set to the same values, and so originally, getting/setting the cameras was kept the same for both cameras. This later got opened up to allow for different settings, but obviously the getter never got updated.

    I'll add this to the bug list. 


  • Sounds good,  I was most interested in testing the Three to verify independent control,  but as of right now I don't have a pressing reason to implement it.  I will just set both cameras the same, especially if the system works best that way.

    I do have a question on SetProjector.   After power up,  the IP address pattern is sticky, and if I go straight to StartVideo before opening a project and running a task which changes the pattern,  the pattern still exists.   I am thinking I have to send a new image to the projector.    Is there a suggested image to send to the projector (all black or white?), and are there suggested settings for the projector image, like width and height?   Thanks!


    I did solve my issue with the IP address image using SetProjector.  I was neglecting to send any information in the 'color' field.    Once I did so,  I was able to get a solid projector image.   So it appears that sending the 'color' information without any pattern or image information will clear any patterns or images.  Thanks!


  • Time for me to demonstrate my newbie-ness to 3D data formats.  I am good with the export functions as the data is presented in one of many defined data formats and I can chase those down.

    I would like to however understand the data formats of the "ScanData" task.

    There are 7 buffers of information that can be returned with the command which are:

    Position 0 Vertex position.
    Normal 1 Vertex normal.
    Color 2 Vertex color.
    UV 3 Vertex UVs
    Quality 4 Vertex quality.
    Triangle 5 Triangle index.
    Texture 6 Texture.

    From my initial cursory investigation,  these seem to be closest to information stored in an OBJ format.

    Is there a resource I could be pointed to that would provide some information on the data formats of the 7 buffers above?

    Thanks!

    Rob


  • Hi Rob,

    Those are all values that describe the points and how they inter-relate, and they can be arranged in different ways depending on the particular file format you want to use. For example, an .xyz file would have

    x,y,z,nx,ny,nz  (the xyz coordinates of a point, and its xyz normal) on each line, and each line would describe one point. 

    If you're looking for more information on this, check out the documentation for any 3D file format (obj, stl, xyz, ply) and you'll see how they're inter-related.
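    A tiny Python sketch of that .xyz layout (illustrative only; it assumes Position and Normal have already been decoded into flat float lists, three values per point):

```python
def write_xyz(path, positions, normals):
    """Write one 'x,y,z,nx,ny,nz' line per point, as described above."""
    assert len(positions) == len(normals)
    with open(path, "w") as f:
        for i in range(0, len(positions), 3):
            x, y, z = positions[i:i + 3]
            nx, ny, nz = normals[i:i + 3]
            f.write(f"{x},{y},{z},{nx},{ny},{nz}\n")

# One point at the origin with a normal pointing along +Z:
write_xyz("scan.xyz", [0.0, 0.0, 0.0], [0.0, 0.0, 1.0])
```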



  • Thank you for your input Trevor,

    I am getting closer to understanding some basic knowledge of the data formats and I now understand most of the data formats for the buffers returned by the ScanData task.  I am going to document my understanding here so let me know where I may be mistaken.  There is 1 piece I still don't understand so I have a follow-up question at the end.

    For buffer 'Position':  This returns a byte array which should represent an array of 'XYZ' values where each of X,Y,Z are floats (little endian).

    For buffer 'Normal':  This returns a byte array which should represent an array of 'XYZ' values where each of X,Y,Z are floats (little endian).

    For buffer 'UV':  This returns a byte array which should represent an array of 'UV' values where each of U,V are floats (little endian).

    For buffer 'Triangle':  This returns a byte array which should represent an array of 'XYZ' values where each of X,Y,Z are Int32 (little endian).
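    The byte layouts above can be decoded with a few lines of Python. This is a sketch under the assumptions just listed (little-endian float32 for Position/Normal/UV, little-endian int32 for Triangle); the function names are my own.

```python
import struct

def decode_float32(buf):
    """Decode a byte buffer into a flat list of little-endian float32s."""
    return list(struct.unpack(f"<{len(buf) // 4}f", buf))

def decode_int32(buf):
    """Decode a byte buffer into a flat list of little-endian int32s."""
    return list(struct.unpack(f"<{len(buf) // 4}i", buf))

# Example: one XYZ position packed as three float32s
raw = struct.pack("<3f", 1.0, 2.0, 3.0)
print(decode_float32(raw))  # -> [1.0, 2.0, 3.0]
```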

    And now where I am missing something:

    For buffer 'Texture':  This returns a byte array which I believe represents a 2D image.  However the only metadata returned (that I can find) is a stride of 0.   In the ScanData I requested, the data buffer had a size of 757467.   How can I decode this buffer to make sense of the data?


  • Hi Rob,
    Position, Normal and UV are correct

    Triangle is the index of the points for each triangle. It's an array that represents Point1, Point2 and Point3 of each triangle. Each index refers back to a position in the "Position", "Normal" and "UV" arrays.

    Example

    Position [x1, y1, z1, x2, y2, z2 . . .]
    Normal [nx1, ny1, nz1, nx2, ny2, nz2 . . . ]
    UV [U1, V1, U2, V2 . . . ]

    Triangle [ 0, 1, 2, 2, 1, 3 ... ]

    In this example the first 3 numbers represent the indices for the first triangle to be drawn: [0, 1, 2]. 
    You then use these indices to get back the positions, normals and UVs.

    For example, if a triangle's indices were (9, 51, 26), then to get the first vertex position we would want the 9th point in the position array:
    XYZ(Position[9*3+0], Position[9*3+1], Position[9*3+2])
    Notice I am multiplying the index by three because each point is 3 numbers. Then the X has an offset of 0, Y has an offset of 1, and Z has an offset of 2.
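    The indexing scheme above can be sketched in Python (illustrative names; flat lists laid out exactly as in the example):

```python
def vertex(positions, index):
    """Return the (x, y, z) of one point; positions holds 3 floats per point."""
    base = index * 3
    return (positions[base], positions[base + 1], positions[base + 2])

def triangle_corners(triangle, positions):
    """Yield the three corner positions of each triangle in the index buffer."""
    for i in range(0, len(triangle), 3):
        yield tuple(vertex(positions, idx) for idx in triangle[i:i + 3])

# Four points forming two triangles, indexed as in the example above
positions = [0.0, 0.0, 0.0,  1.0, 0.0, 0.0,  0.0, 1.0, 0.0,  1.0, 1.0, 0.0]
triangle = [0, 1, 2, 2, 1, 3]

for corners in triangle_corners(triangle, positions):
    print(corners)
```

    The same index is used to look up the matching normal and UV, so one `vertex`-style helper per buffer covers all three.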


    For the Texture buffer, unfortunately it's not well documented, and the format differs depending on WHERE you get the image data from:

    • For merged outputs it will typically be PNG
    • For CaptureImage it depends on what you set (jpeg, png, raw or bmp)
    • For scans it's typically jpeg

    Here's the python example for capturing an image: https://github.com/Matter-and-Form/three-python-library/blob/7b5743c10153bb15fecc8220c7f5173512c2411c/three/examples/captureImage.py

    For those image types, I strongly suggest you use a library instead of parsing the data yourself, unless you really want to learn the specifications.
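    Since the container varies by source, one pragmatic trick before handing the buffer to an image library is to sniff its magic bytes. A rough sketch covering only the formats mentioned above (raw pixel data has no signature, so it falls through to "unknown"):

```python
def sniff_image_format(buf):
    """Guess an image buffer's container from its leading magic bytes."""
    if buf.startswith(b"\xff\xd8\xff"):          # JPEG SOI marker
        return "jpeg"
    if buf.startswith(b"\x89PNG\r\n\x1a\n"):     # PNG signature
        return "png"
    if buf.startswith(b"BM"):                    # BMP header
        return "bmp"
    return "unknown"                             # possibly raw pixel data

print(sniff_image_format(b"\xff\xd8\xff\xe0\x00\x10JFIF"))  # -> jpeg
```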


  • Awesome Drew, that is exactly the information I was looking for.  I wasn't thinking jpeg, though in hindsight it should have been obvious that it was a potential format, with the buffer size being smaller than I was expecting. 

    I did just confirm using OpenCV that the data represents a jpeg image. 

    Thank you very much!


  • Have been slowly working through more Three features and just worked through the Turntable Calibration API section, was able to figure most things out without a need to post.

    However, one small thing I have not been able to find.   The calibration routines return the time and date, and I would like to update them through the API.   Reading the docs,  I have not found a way to do this.   My system clock is about 6 months behind.

    I have only used the unit on a local cable to my laptop.   I am thinking I may need to connect to wifi to do this,  but I thought I would ask the question first as I would think it may be available through the API.

    Thanks!

    Rob

