SAFETYLIT WEEKLY UPDATE


Journal Article

Citation

Qin Z, Yin M, Lin Z, Yang F, Zhong C. Vis. Comput. 2021; 37(8): 2195-2205.

Copyright

(Copyright © 2021, Computer Graphics Society, Publisher Holtzbrinck Springer Nature Publishing Group)

DOI

10.1007/s00371-020-01979-2

PMID

unavailable

Abstract

Multiple views of an object can be used for 3D reconstruction. The method proposed in this paper generates the left and top views of a target car through deep learning. The input is only a front view of the 3D car; no depth information is required. First, rough orthographic views of the car are obtained from an information constraint network built on the structural relations between one view and the other two. The rough orthographic views are then transformed into large-pixel-block views by nearest-neighbor interpolation, and the large pixel blocks are also migrated to improve the quality of the rough views. Finally, a generative adversarial network with a perceptual loss is used to enhance the large-pixel-block views. In addition, the three views generated by the network can be used to synthesize a 3D point cloud shell.
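The "large-pixel-block" step described above amounts to nearest-neighbor upsampling: each pixel of the rough view is expanded into a constant block, which the GAN stage then refines. A minimal sketch of that interpolation step, using NumPy (the function name and scale factor are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def upsample_nearest(view: np.ndarray, scale: int) -> np.ndarray:
    """Expand each pixel of a 2-D view into a scale x scale constant block
    (nearest-neighbor interpolation), yielding a large-pixel-block view."""
    return np.repeat(np.repeat(view, scale, axis=0), scale, axis=1)

# A 2x2 rough view becomes a 4x4 image made of 2x2 constant blocks.
rough = np.array([[1, 2],
                  [3, 4]])
coarse = upsample_nearest(rough, 2)
print(coarse)
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```

In the paper's pipeline this coarse, blocky view is the input to the adversarial enhancement stage rather than a final output.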


Language: en
