We have a large mural on a big wall. The requirement is that, when you look at the mural through a handheld device such as a smartphone camera, image overlays appear at specific positions within it: the mural has deliberately blank areas, and the corresponding cutout images should be displayed on top of those areas.
Now, I followed the AR.js tutorial on image tracking and it sort of works, but my impression is that it is designed almost exclusively for small, roughly horizontal placements, like putting a car model on your desk. The objects I managed to attach to the mural are practically impossible to position correctly, even when I rotate them or add a component that changes their orientation.
This is what I have so far, tested with different sizes, rotations, and positions:
<!DOCTYPE html>
<html>
  <head>
    <script src="https://cdn.jsdelivr.net/gh/aframevr/aframe@1c2407b26c61958baa93967b5412487cd94b290b/dist/aframe-master.min.js"></script>
    <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar-nft.js"></script>
    <title></title>
  </head>
  <body style="margin: 0px; overflow: hidden;">
    <a-scene
      vr-mode-ui="enabled: false;"
      renderer="logarithmicDepthBuffer: true;"
      embedded
      arjs="trackingMethod: best; sourceType: webcam; debugUIEnabled: false;"
    >
      <!-- "url" is a placeholder for the path to the generated NFT descriptor
           files (.iset/.fset/.fset3), given without the file extension -->
      <a-nft
        type="nft"
        url="url"
        smooth="true"
        smoothCount="10"
        smoothTolerance=".01"
        smoothThreshold="5"
        size="1,2"
      >
        <!-- test object: a plain white plane to see what lines up with the mural -->
        <a-plane color="#FFF" height="10" width="10" rotation="45 45 45" position="0 0 0"></a-plane>
      </a-nft>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
It would also be interesting to know how the size, width, and height values actually work together. For instance, the documentation says that size is the NFT image size in meters, but does that value really matter, and how does it relate to the dimensions of the child entities?
So I wondered: do I even need AR? It would actually be enough to detect image A in the mural (i.e. in the camera stream) and place another image B on top of it (or replace it), respecting the perspective.
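To make that idea concrete, this is roughly the non-AR pipeline I have in mind: feature matching between image A and the camera frame, a homography estimated from the matches, and a perspective warp of image B into the frame. The sketch below is Python/OpenCV purely for illustration (untested, and the file names are placeholders); presumably the same steps could run in the browser with something like OpenCV.js.

import cv2
import numpy as np

# Placeholders: image A = the known region of the mural, image B = the cutout
# to show on top of it, frame = one still image from the camera stream.
ref = cv2.imread("image_a.png", cv2.IMREAD_GRAYSCALE)
overlay = cv2.imread("image_b.png")
frame = cv2.imread("camera_frame.jpg")

# Find feature correspondences between image A and the camera frame.
orb = cv2.ORB_create(2000)
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_frame, des_frame = orb.detectAndCompute(
    cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_frame), key=lambda m: m.distance)[:100]

# Estimate the homography that maps image A's pixel coordinates into the frame.
src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_frame[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp image B with the same homography and paste it over the detected region.
h, w = ref.shape
overlay = cv2.resize(overlay, (w, h))
warped = cv2.warpPerspective(overlay, H, (frame.shape[1], frame.shape[0]))
mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H,
                           (frame.shape[1], frame.shape[0]))
result = frame.copy()
result[mask > 0] = warped[mask > 0]
cv2.imwrite("result.jpg", result)

Of course this only handles a single still image; for a live overlay it would have to run per frame (or track the homography between frames), which is essentially what the AR libraries already do for me. Is that a reasonable direction, or am I better off sticking with AR.js?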
