Fudan University

(Quantitative comparison of fusion methods; the original column headers were not recovered, so metrics are numbered.)

| Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Fusion | 0.841 | 0.959 | 0.989 | 0.918 | 0.878 | 0.970 | 0.926 | 89.523 | 57.843 | 85.234 | 33.852 | 65.076 | 94.965 | 60.829 |
| TarDAL | 0.896 | 0.976 | 0.993 | 0.951 | 0.913 | 0.983 | 0.952 | 87.462 | 52.695 | 77.013 | 31.397 | 53.117 | 90.756 | 56.194 |
| CDDFuse | 0.883 | 0.974 | 0.991 | 0.943 | 0.903 | 0.976 | 0.945 | 89.811 | 57.901 | 87.383 | 32.988 | 67.748 | 96.055 | 61.649 |
| LRRNet | 0.898 | 0.978 | 0.993 | 0.958 | 0.904 | 0.986 | 0.953 | 89.705 | 59.081 | 86.685 | 34.596 | 66.974 | 95.678 | 62.942 |
| IVFWSR | 0.876 | 0.974 | 0.991 | 0.948 | 0.927 | 0.984 | 0.952 | 88.673 | 59.003 | 85.711 | 34.307 | 63.281 | 95.324 | 60.801 |
| CoCoNet | 0.893 | 0.974 | 0.991 | 0.947 | 0.899 | 0.982 | 0.948 | 89.630 | 57.803 | 86.597 | 32.492 | 64.982 | 95.653 | 58.513 |
| EMMA | 0.898 | 0.978 | 0.994 | 0.956 | 0.909 | 0.984 | 0.953 | 90.132 | 55.686 | 87.584 | 33.191 | 68.315 | 96.055 | 61.952 |
| TIMFusion | 0.896 | 0.977 | 0.993 | 0.958 | 0.913 | 0.984 | 0.954 | 89.911 | 56.663 | 87.015 | 32.953 | 66.018 | 95.874 | 62.836 |
| DCEvo | 0.885 | 0.975 | 0.989 | 0.946 | 0.907 | 0.977 | 0.946 | 89.372 | 56.562 | 87.231 | 33.876 | 68.513 | 96.075 | 60.366 |
| SAGE | 0.900 | 0.977 | 0.993 | 0.960 | 0.924 | 0.985 | 0.956 | 88.189 | 55.754 | 86.682 | 33.715 | 66.070 | 95.774 | 60.777 |
| Ours | 0.902 | 0.972 | 0.994 | 0.967 | 0.919 | 0.990 | 0.948 | 89.549 | 65.923 | 87.178 | 36.346 | 67.578 | 95.967 | 62.939 |

4.3 Performance on Downstream Tasks

To further evaluate the effect of different methods on downstream tasks, we conduct object detection and semantic segmentation experiments on the M³FD and FMB datasets, respectively. The results are shown in Table 2 and Figure 5. For object detection, the fused images are used to train and test YOLOv5s (https://github.com/ultralytics/yolov5).
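Training and testing YOLOv5s on fused images requires detection annotations in YOLO's text label format: one line per object, giving the class id and the box center and size, all normalized by the image dimensions. A minimal sketch of that conversion is below; the function name and the corner-box input convention are illustrative, not taken from the paper.

```python
def to_yolo_label(cls_id, xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert a pixel-space corner box to a YOLO-format label line.

    YOLO labels store `class x_center y_center width height`,
    with all four box values normalized to [0, 1] by image size.
    """
    x_c = (xmin + xmax) / 2.0 / img_w
    y_c = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{cls_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"

# Example: a 50x50 box in the top-left corner of a 100x100 fused image.
print(to_yolo_label(0, 0, 0, 50, 50, 100, 100))
# → 0 0.250000 0.250000 0.500000 0.500000
```

One such `.txt` file per fused image, placed alongside the images, is the layout YOLOv5's data loader expects.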