
**Important, please read first!**

- The assignment should be submitted via Blackboard; the deadline is May 4th, 2pm. All deadlines can be found in SharePoint.

- Your software will be tested and should work with Python 3.11.1. You can use additional Python libraries to achieve any additional functionality (e.g., NumPy). However, the methods asked in problems 1 – 5 should be implemented by you, as evidenced in the source code that you will submit, not via existing libraries that implement these algorithms.

- Please submit a compressed archive (.zip) containing: (a) all your source code, the original/modified engine source files, and the 3D model. I should be able to run your software just by running `python3 render.py` in a terminal and see the requested sequence on screen. Include a readme file with instructions on how to run your program and what external resources it requires, if this is not obvious. (b) A .pdf report no longer than 6 pages including images (max ~2500 words). At the top of the first page of the document, identify yourself using your CIS username. (c) A short video demonstrating the full sequence (compressed to <100 MB; you can use any online video compression platform to compress it). The video should show the movement in real time as indicated by the tracking dataset, i.e., it should last ~27 seconds. When running your software, though, it is fine if the rendering takes more time – software renderers are slow.

- The marks available for correct implementations/answers to each question are indicated. Partial credit will be given for good attempts.

- The level of achievement (good/very good/excellent/etc.) for each marking criterion is determined based on the marking and classification conventions published in the university core regulations (pp 15-16): link.

- The Virtual & Augmented Reality module is only assessed by this coursework (100% of the module mark).

- A FAQ section in Blackboard will be updated with questions as they arise.

You work at a VR company and are currently participating in the development of a prototype 3D graphics engine capable of rendering content for VR headsets. You have been tasked with working on the 3D engine to develop basic rendering, tracking, physics, and distortion pre-correction for VR. You are provided with a rudimentary 3D engine [2] (available in Blackboard), capable of rendering simple 3D objects using orthographic projection only (i.e., no perspective projection: vertices are projected onto the screen without taking their depth – distance from the camera – into account). The engine can also perform simple shading, based on vertex colour interpolation, and outputs a single framebuffer to disk. Your assignment is to extend the 3D engine to handle perspective projection and object transformations, tracking, physics, and distortion correction.

Your task list is below. You should demonstrate your development in a demo scene by rendering a few VR headsets (a 3D model is provided) falling from the sky under the effect of gravity, while the camera yaws, pitches, and rolls according to the real headset tracking dataset provided in Blackboard.

**Before proceeding**, please read LaValle’s relevant chapters in the free book available in Blackboard, attend all relevant lectures and read [4] carefully (Please note the typo in the order of operations in eq. 8 & 10).

Additional helpful but not necessary resources can be found in Blackboard ( [1, 3])

PROBLEM 1, RENDERING – 15 MARKS:

The provided engine currently only produces static renders. Add the following features to the rendering engine by updating its source code as required:

- Enable real-time output of the frame buffer on the screen, instead of output to disk. You could use PIL / Matplotlib / OpenCV / etc. to show the buffer on the screen. (1 mark)

- Implement perspective projection instead of orthographic projection. You will need to extend some elements of the engine to handle homogeneous coordinates. (6 marks)

- Implement the basic transformation matrices, in particular add the ability to translate, rotate, and scale objects. You should use these in the demo scene, e.g., rotating the headsets as they fall from the sky. (8 marks)
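As a starting point, the projection and transformation matrices above can be sketched with NumPy as follows. This is a minimal illustration only: the field of view, near/far planes, and the OpenGL-style clip-space conventions used here are assumptions, and the provided engine may use different conventions (e.g., row vectors or a different handedness).

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (column-vector convention,
    camera looking down -Z). The engine's own conventions may differ."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def translate(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def rotate_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [-s, 0.0, c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# A vertex in homogeneous coordinates; after projection, divide by w
# (the perspective divide) to obtain normalized device coordinates.
v = np.array([0.0, 0.0, -5.0, 1.0])
clip = perspective(60, 16 / 9, 0.1, 100) @ translate(1, 0, 0) @ rotate_y(0.3) @ v
ndc = clip[:3] / clip[3]
```

Rotation matrices about X and Z follow the same pattern, and a full model matrix is the product `translate(...) @ rotate(...) @ scale(...)`.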

PROBLEM 2, TRACKING: HANDLING POSITIONAL DATA – 15 MARKS:

In the Blackboard coursework folder, you can find a sample dataset (6,959 records) acquired from a VR headset's IMU. The headset was sequentially rotated from 0 degrees to +90 degrees and then to -90 degrees around the X, Y, and Z axes (Z is up, due to the way the IMU was soldered to that particular headset). IMU observations were recorded at a rate of 256 Hz: time in seconds, tri-axial angular velocity (rotational rate) in deg/s, tri-axial acceleration in g (1 g ≈ 9.81 m/s²), and tri-axial magnetometer flux readings in Gauss (G) – the flux readings will not be used in this coursework.

- Read and import the provided (.csv) dataset (6 marks).
- Convert the rotational rate to radians/sec and normalize both the accelerometer and magnetometer vectors, taking special care to avoid NaNs from division by zero (5 marks).
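The conversion and normalization steps above might look like the following sketch. The epsilon threshold is an assumed guard value; the key point is that zero-magnitude rows produce a zero vector rather than a 0/0 NaN.

```python
import numpy as np

def deg_to_rad(gyro_deg_s):
    """Convert rotational rate readings from deg/s to rad/s."""
    return np.radians(gyro_deg_s)

def normalize_rows(v, eps=1e-12):
    """Normalize each 3-vector row to unit length. Rows with (near-)zero
    magnitude are left as zeros instead of producing NaNs from 0/0."""
    v = np.asarray(v, dtype=float)
    norms = np.linalg.norm(v, axis=-1, keepdims=True)
    safe = np.where(norms > eps, norms, 1.0)  # avoid dividing by zero
    return np.where(norms > eps, v / safe, 0.0)
```

The same `normalize_rows` can be applied to both the accelerometer and magnetometer columns after loading the CSV.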

- Implement *your own* methods to (i) convert Euler angle readings (radians) to quaternions (1 mark), (ii) calculate Euler angles from a quaternion (1 mark), (iii) convert a quaternion to its conjugate (inverse rotation) (1 mark), and (iv) calculate the quaternion product of quaternions a and b (1 mark).
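A possible sketch of the four quaternion operations, using the (w, x, y, z) component order and the Z-Y-X (yaw-pitch-roll) Euler convention. Both the ordering and the convention are assumptions here; check them against LaValle's chapters and your dataset before relying on them.

```python
import numpy as np

def euler_to_quaternion(roll, pitch, yaw):
    """Euler angles (radians, Z-Y-X intrinsic convention) -> (w, x, y, z)."""
    cr, sr = np.cos(roll / 2), np.sin(roll / 2)
    cp, sp = np.cos(pitch / 2), np.sin(pitch / 2)
    cy, sy = np.cos(yaw / 2), np.sin(yaw / 2)
    return np.array([cr * cp * cy + sr * sp * sy,
                     sr * cp * cy - cr * sp * sy,
                     cr * sp * cy + sr * cp * sy,
                     cr * cp * sy - sr * sp * cy])

def quaternion_to_euler(q):
    """(w, x, y, z) -> (roll, pitch, yaw) in radians, same convention."""
    w, x, y, z = q
    roll = np.arctan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = np.arcsin(np.clip(2 * (w * y - z * x), -1.0, 1.0))
    yaw = np.arctan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

def quaternion_conjugate(q):
    """Conjugate (inverse rotation for a unit quaternion)."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quaternion_product(a, b):
    """Hamilton product a * b: applying b first, then a."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw * bw - ax * bx - ay * by - az * bz,
                     aw * bx + ax * bw + ay * bz - az * by,
                     aw * by - ax * bz + ay * bw + az * bx,
                     aw * bz + ax * by - ay * bx + az * bw])
```

A useful self-check: converting Euler angles to a quaternion and back should recover the angles (away from the pitch = ±90° singularity), and the product of a unit quaternion with its conjugate should be the identity [1, 0, 0, 0].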

PROBLEM 3, TRACKING: CAMERA POSE CALCULATION – 20 MARKS:

- Implement a dead reckoning filter (using the gyroscope-measured rotational rate) with gravity-based tilt correction (using the accelerometer data). First, calculate the current orientation from the previously determined orientation, updating it by the estimated angular velocity over the elapsed time (5 marks). Consider the initial orientation q[0] to be the identity quaternion [1, 0, 0, 0].

- Then include the accelerometer information in the computation: transform the acceleration measurements into the global frame (2 marks). Calculate the tilt axis (2 marks) and find the angle *ϕ* between the *up* vector and the vector obtained from the accelerometer (2 marks). Use the complementary filter to fuse the gyroscope estimate and the accelerometer estimate (4 marks). Upon firing up the engine, the virtual camera should rotate based on the fused input data (gyroscope & accelerometer).

- Try a few different alpha values (e.g., 0.01, 0.1, …), investigate and comment on their effect on drift compensation in your report. Implement any other processing of the accelerometer values that you consider important / necessary and discuss this in the report. (5 marks)
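One way to structure a single filter step covering the bullets above is sketched below. The up-vector choice, the alpha value, and the direction of the corrective rotation are all assumptions to verify against [4] and your engine's coordinate conventions (the brief notes the dataset is Z-up).

```python
import numpy as np

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw * bw - ax * bx - ay * by - az * bz,
                     aw * bx + ax * bw + ay * bz - az * by,
                     aw * by - ax * bz + ay * bw + az * bx,
                     aw * bz + ax * by - ay * bx + az * bw])

def quat_conj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def from_axis_angle(axis, angle):
    axis = axis / (np.linalg.norm(axis) + 1e-12)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def rotate_vec(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * q^-1."""
    p = np.concatenate([[0.0], v])
    return quat_mul(quat_mul(q, p), quat_conj(q))[1:]

def fused_update(q, gyro_rad_s, accel, dt, alpha=0.05):
    """One dead-reckoning step with gravity-based tilt correction.
    alpha is the complementary-filter gain (illustrative value)."""
    # 1. Gyro integration: rotate by |omega|*dt about the measured axis.
    w_mag = np.linalg.norm(gyro_rad_s)
    if w_mag > 1e-12:
        q = quat_mul(q, from_axis_angle(gyro_rad_s / w_mag, w_mag * dt))
    # 2. Transform the (normalized) acceleration into the global frame.
    a_global = rotate_vec(q, accel / (np.linalg.norm(accel) + 1e-12))
    up = np.array([0.0, 1.0, 0.0])  # assumed Y-up; the dataset is Z-up
    # 3. Tilt axis (perpendicular to both vectors) and tilt angle phi.
    axis = np.cross(a_global, up)
    phi = np.arccos(np.clip(np.dot(a_global, up), -1.0, 1.0))
    # 4. Complementary filter: correct a small fraction alpha of the tilt.
    if np.linalg.norm(axis) > 1e-12:
        q = quat_mul(from_axis_angle(axis, alpha * phi), q)
    return q / np.linalg.norm(q)
```

With alpha = 0, this reduces to pure gyroscope dead reckoning (and will drift); larger alpha values trust the accelerometer more, which compensates for drift but makes the estimate noisier during fast motion.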

PROBLEM 5, PHYSICS – 20 MARKS:

Implement simple physics in the engine, simulating gravitational acceleration and air resistance applied to the falling objects. See the additional literature in the Blackboard coursework folder for formulas for calculating air resistance. Choose arbitrary values for the *drag coefficient* (e.g., 0.5), *air density* (e.g., 1.3 kg/m³) and *area* (e.g., 0.2 m²). (10 marks)
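An integration step for this could look like the sketch below, using the quadratic drag formula F_d = ½ρC_dA|v|² opposing the velocity direction, with the example constants from the brief. The mass, the Y-up axis, and the semi-implicit Euler scheme are assumptions for illustration.

```python
import numpy as np

def physics_step(pos, vel, dt, mass=1.0, g=9.81,
                 drag_coeff=0.5, air_density=1.3, area=0.2):
    """Semi-implicit Euler step: gravity plus quadratic air resistance
    F_d = 0.5 * rho * C_d * A * |v|^2, acting opposite to the velocity.
    Drag/density/area values are from the brief; mass is a placeholder."""
    speed = np.linalg.norm(vel)
    force = np.array([0.0, -mass * g, 0.0])  # gravity, assuming Y is up
    if speed > 1e-12:
        # 0.5 * rho * C_d * A * speed * vel has magnitude 0.5*rho*C_d*A*|v|^2
        force -= 0.5 * air_density * drag_coeff * area * speed * vel
    vel = vel + (force / mass) * dt
    pos = pos + vel * dt
    return pos, vel
```

With these constants, a 1 kg object approaches a terminal speed of √(2mg / (ρC_dA)) ≈ 12.3 m/s, which is a quick sanity check for the implementation.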

Implement simple distance-based collision detection between the objects. Use spheres of an appropriate radius as bounding regions to perform the calculation. For your demo scene, arrange the objects in such a way that a few collide and change direction. (10 marks)
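A minimal sketch of the sphere test and a simple response: two spheres collide when the distance between their centres is less than the sum of their radii, and an elastic response exchanges the velocity components along the contact normal. Equal masses and the radius value are assumptions; a cheaper response (e.g., just negating the normal velocity component) would also satisfy "change direction".

```python
import numpy as np

def sphere_collision(p1, v1, p2, v2, radius=1.0):
    """Distance-based collision between two equal-mass spheres of the
    same radius. On overlap, swap the velocity components along the
    contact normal (elastic response); otherwise return v1, v2 unchanged."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist >= 2 * radius or dist < 1e-12:
        return v1, v2  # no contact (or coincident centres)
    n = d / dist                   # contact normal, from sphere 1 to 2
    rel = np.dot(v1 - v2, n)       # closing speed along the normal
    if rel <= 0:
        return v1, v2              # already separating; avoid re-colliding
    return v1 - rel * n, v2 + rel * n
```

The `rel <= 0` check prevents spheres that overlap across several frames from repeatedly re-reversing their velocities.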
