Recent research shows that even carefully aligned large language models (LLMs) can be manipulated by malicious prompts into producing unintended behaviors, a phenomenon known as "jailbreaking". Understanding how LLMs become misaligned under specific jailbreak attacks helps us build safer LLMs. Previous work on jailbreak attacks has primarily focused on optimizing adversarial prompts through costly training or on improving decoding configurations via parameter search. Both kinds of attacks are complicated and time-consuming. In contrast, we propose two jailbreak approaches based on output prefix attacks that effectively disrupt model alignment: OPRA and OPRATEA. OPRA forces the LLM's output prefix to begin with a "fuse" followed by the user's target. OPRATEA additionally conceals the malicious target within the input prompt to circumvent the "Maginot Line", a standalone module in the LLM system dedicated to detecting malicious inputs. Both methods are extremely simple: they require no training or parameter search; setting up our attack on any LLM requires only a single inference; and applying the attack to any input requires only a string replacement. OPRA and OPRATEA increase the misalignment rates of LLAMA2-7B-CHAT, LLAMA2-13B-CHAT, LLAMA3-8B-INSTRUCT, and GPT-3.5-TURBO, outperforming the state-of-the-art attack at 1000x lower computational cost.