The decision-making and planning system for autonomous driving in urban environments is difficult to design. Most current methods rely on manually designed driving policies, which are expensive to develop and maintain at scale. In contrast, imitation learning only requires collecting data, from which the driving policy can be learned and improved automatically. However, existing imitation learning methods for autonomous driving rarely perform well in complex urban scenarios. Moreover, safety is not guaranteed when a deep neural network policy is used. In this paper, we propose a framework that efficiently learns a driving policy for urban scenarios from offline connected driving data, with a safety controller incorporated to guarantee safety at test time. Experiments show that our method achieves high performance in realistic simulations of urban driving scenarios.