How to run pvserver under Slurm with Singularity?

Hello,
I've run into some issues running pvserver on an HPC cluster (Slurm) via Singularity, and I'm sincerely looking for some help.

Version
ParaView-5.11.0-MPI-Linux-Python3.9-x86_64.tar.gz (downloaded from this link)

Run Env
An HPC cluster managed by Slurm (CentOS 7), with no MPI installed on the hosts

Question
I packaged ParaView into a Singularity SIF image.

The Dockerfile looks like this:

FROM centos:7

COPY paraview /usr/local/paraview

RUN yum install -y libXcursor mesa-libGL libgomp

Build the Docker image:

docker build . -t paraview-dev:test01

Then convert the Docker image into a Singularity SIF:

singularity build paraview.sif paraview.def

paraview.def

# paraview-5-11.def
# Build from the locally built image via the Docker daemon;
# plain "BootStrap: docker" would try to pull from a registry instead.
BootStrap: docker-daemon
From: paraview-dev:test01

%labels
    Version ParaView 5.11

%runscript
    exec "$@"
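With the image built, a quick single-node sanity check (same paths as above; 11111 is pvserver's default port) can confirm the container itself works before MPI gets involved:

# Should print "Waiting for client..." and the connection URL
singularity exec /data/wanghao/paraview.sif /usr/local/paraview/bin/pvserver --server-port=11111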

sbatch script1

#!/bin/bash

#SBATCH -N 2

singularity run -e /data/wanghao/paraview.sif \
    /usr/local/paraview/bin/mpiexec --hosts c1,c2 -np 2 \
    /usr/local/paraview/bin/pvserver

Submitting script1 fails with the error below:

bash: /usr/local/paraview/bin/hydra_pmi_proxy: No such file or directory
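My guess is that mpiexec (MPICH's Hydra launcher) tries to ssh into each host and start hydra_pmi_proxy there, outside the container, where that path only exists inside the SIF. A quick way to check would be:

# The proxy exists in the SIF but not on the hosts themselves
ssh c2 ls /usr/local/paraview/bin/hydra_pmi_proxy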

Maybe I should not run mpiexec from inside the container (singularity run ... mpiexec -np 2 pvserver)? So I tried the other way around in script2: mpiexec ... singularity run ... pvserver.
sbatch script2

#!/bin/bash

#SBATCH -N 2

# /data/wanghao/paraview/bin/mpiexec is the same mpiexec as the one
# inside the SIF (/usr/local/paraview/bin/mpiexec)
/data/wanghao/paraview/bin/mpiexec --hosts c1,c2 -np 2 \
    singularity run -e /data/wanghao/paraview.sif /usr/local/paraview/bin/pvserver

The Slurm output is:

[root@c1 data]# cat slurm-37.out 
Waiting for client...
Connection URL: cs://c1:11111
Accepting connection(s): c1:11111
Waiting for client...
Connection URL: cs://c2:11111
Accepting connection(s): c2:11111

It looks like this launches two independent rank-0 pvserver processes, one listening on each node, instead of one two-rank MPI job?
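If I understand correctly, the -e (--cleanenv) flag strips the container environment, including the PMI_* variables that Hydra sets for each rank, so every pvserver falls back to singleton MPI initialization. An untested variant without -e (same paths as above) might let the two ranks find each other:

#!/bin/bash

#SBATCH -N 2

# No -e here: keep the PMI environment Hydra passes to each rank,
# so both pvserver processes can join a single MPI_COMM_WORLD
/data/wanghao/paraview/bin/mpiexec --hosts c1,c2 -np 2 \
    singularity run /data/wanghao/paraview.sif /usr/local/paraview/bin/pvserver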

What I want
I want to run pvserver inside Singularity, launched through MPI under Slurm, as a single multi-node job.
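From what I've read, letting Slurm launch the ranks directly over PMI might be the cleanest route. Something like this untested sketch, assuming the cluster's Slurm has the pmi2 plugin and ParaView's bundled MPICH was built with PMI2 support:

#!/bin/bash

#SBATCH -N 2
#SBATCH --ntasks-per-node=1

# srun starts one pvserver per task and wires them up via PMI2,
# so no ssh and no hydra_pmi_proxy are needed on the hosts
srun --mpi=pmi2 singularity exec /data/wanghao/paraview.sif \
    /usr/local/paraview/bin/pvserver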
I hope to get a reply. Thank you again!