#!/usr/bin/env python3
# Copyright 2017 The Chromium Authors
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
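"""Token-level git blame: finds the commit that last modified each token of a
file and renders the result as an interactive HTML page.

Example invocation (run inside a git checkout; the file path is illustrative):
    ./uberblame.py HEAD base/logging.cc
"""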
import argparse
import colorsys
import difflib
import html
import random
import os
import re
import subprocess
import sys
import tempfile
import textwrap
import webbrowser
class TokenContext(object):
"""Metadata about a token.
Attributes:
row: Row index of the token in the data file.
column: Column index of the token in the data file.
token: The token string.
commit: A Commit object that corresponds to the commit that added
this token.
"""
def __init__(self, row, column, token, commit=None):
self.row = row
self.column = column
self.token = token
self.commit = commit
class Commit(object):
"""Commit data.
Attributes:
hash: The commit hash.
author_name: The author's name.
author_email: the author's email.
author_date: The date and time the author created this commit.
message: The commit message.
diff: The commit diff.
"""
def __init__(self, hash, author_name, author_email, author_date, message,
diff):
self.hash = hash
self.author_name = author_name
self.author_email = author_email
self.author_date = author_date
self.message = message
self.diff = diff
def tokenize_data(data, tokenize_by_char, tokenize_whitespace):
"""Tokenizes |data|.
Args:
data: String to tokenize.
tokenize_by_char: If true, individual characters are treated as tokens.
Otherwise, tokens are either symbols or strings of both alphanumeric
characters and underscores.
tokenize_whitespace: Treat non-newline whitespace characters as tokens.
Returns:
A list of lists of TokenContexts. Each list represents a line.
"""
contexts = []
in_identifier = False
identifier_start = 0
identifier = ''
row = 0
column = 0
line_contexts = []
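  # Scan the data one character at a time, accumulating runs of alphanumeric
  # characters and underscores into identifier tokens (unless tokenizing by
  # char); every other emitted character becomes a single-character token.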
for c in data:
if not tokenize_by_char and (c.isalnum() or c == '_'):
if in_identifier:
identifier += c
else:
in_identifier = True
identifier_start = column
identifier = c
else:
if in_identifier:
line_contexts.append(TokenContext(row, identifier_start, identifier))
in_identifier = False
if not c.isspace() or (tokenize_whitespace and c != '\n'):
line_contexts.append(TokenContext(row, column, c))
if c == '\n':
row += 1
column = 0
contexts.append(line_contexts)
        line_contexts = []
else:
column += 1
  # Flush a trailing identifier in case |data| does not end with a newline.
  if in_identifier:
    line_contexts.append(TokenContext(row, identifier_start, identifier))
  contexts.append(line_contexts)
return contexts
def compute_unified_diff(old_tokens, new_tokens):
"""Computes the diff between |old_tokens| and |new_tokens|.
Args:
old_tokens: Token strings corresponding to the old data.
new_tokens: Token strings corresponding to the new data.
Returns:
The diff, in unified diff format.
"""
return difflib.unified_diff(old_tokens, new_tokens, n=0, lineterm='')
def parse_chunk_header_file_range(file_range):
"""Parses a chunk header file range.
Diff chunk headers have the form:
@@ -<file-range> +<file-range> @@
File ranges have the form:
<start line number>,<number of lines changed>
Args:
file_range: A chunk header file range.
Returns:
A tuple (range_start, range_end). The endpoints are adjusted such that
iterating over [range_start, range_end) will give the changed indices.
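  For example, '12,3' parses to (11, 14) and '7' parses to (6, 7); both are
  0-indexed, half-open ranges.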
"""
if ',' in file_range:
file_range_parts = file_range.split(',')
start = int(file_range_parts[0])
amount = int(file_range_parts[1])
if amount == 0:
return (start, start)
return (start - 1, start + amount - 1)
else:
return (int(file_range) - 1, int(file_range))
def compute_changed_token_indices(previous_tokens, current_tokens):
"""Computes changed and added tokens.
Args:
previous_tokens: Tokens corresponding to the old file.
current_tokens: Tokens corresponding to the new file.
Returns:
A tuple (added_tokens, changed_tokens).
added_tokens: A list of indices into |current_tokens|.
changed_tokens: A map of indices into |current_tokens| to
indices into |previous_tokens|.
"""
prev_file_chunk_end = 0
prev_patched_chunk_end = 0
added_tokens = []
changed_tokens = {}
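  # The unified diff is computed with zero lines of context, so each hunk
  # header gives exactly the ranges of added and removed tokens, and every
  # token between hunks is unchanged.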
for line in compute_unified_diff(previous_tokens, current_tokens):
if line.startswith("@@"):
parts = line.split(' ')
removed = parts[1].lstrip('-')
removed_start, removed_end = parse_chunk_header_file_range(removed)
added = parts[2].lstrip('+')
added_start, added_end = parse_chunk_header_file_range(added)
for i in range(added_start, added_end):
added_tokens.append(i)
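      # Tokens between the end of the previous hunk and the start of this one
      # are unchanged; map their indices in |current_tokens| back to their
      # indices in |previous_tokens|.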
for i in range(0, removed_start - prev_patched_chunk_end):
changed_tokens[prev_file_chunk_end + i] = prev_patched_chunk_end + i
prev_patched_chunk_end = removed_end
prev_file_chunk_end = added_end
for i in range(0, len(previous_tokens) - prev_patched_chunk_end):
changed_tokens[prev_file_chunk_end + i] = prev_patched_chunk_end + i
return added_tokens, changed_tokens
def flatten_nested_list(l):
"""Flattens a list and provides a mapping from elements in the list back
into the nested list.
Args:
l: A list of lists.
Returns:
A tuple (flattened, index_to_position):
flattened: The flattened list.
      index_to_position: A dict mapping each index i into |flattened| to the
        pair (r, c) such that flattened[i] == l[r][c].
"""
flattened = []
index_to_position = {}
r = 0
c = 0
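  # For example, [['a', 'b'], ['c']] flattens to ['a', 'b', 'c'] with
  # index_to_position == {0: (0, 0), 1: (0, 1), 2: (1, 0)}.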
for nested_list in l:
for element in nested_list:
index_to_position[len(flattened)] = (r, c)
flattened.append(element)
c += 1
r += 1
c = 0
return (flattened, index_to_position)
def compute_changed_token_positions(previous_tokens, current_tokens):
"""Computes changed and added token positions.
Args:
previous_tokens: A list of lists of token strings. Lines in the file
correspond to the nested lists.
current_tokens: A list of lists of token strings. Lines in the file
correspond to the nested lists.
Returns:
A tuple (added_token_positions, changed_token_positions):
added_token_positions: A list of pairs that index into |current_tokens|.
changed_token_positions: A map from pairs that index into
|current_tokens| to pairs that index into |previous_tokens|.
"""
flat_previous_tokens, previous_index_to_position = flatten_nested_list(
previous_tokens)
flat_current_tokens, current_index_to_position = flatten_nested_list(
current_tokens)
added_indices, changed_indices = compute_changed_token_indices(
flat_previous_tokens, flat_current_tokens)
added_token_positions = [current_index_to_position[i] for i in added_indices]
changed_token_positions = {
current_index_to_position[current_i]:
previous_index_to_position[changed_indices[current_i]]
for current_i in changed_indices
}
return (added_token_positions, changed_token_positions)
def parse_chunks_from_diff(diff):
"""Returns a generator of chunk data from a diff.
Args:
diff: A list of strings, with each string being a line from a diff
in unified diff format.
Returns:
A generator of tuples (added_lines_start, added_lines_end, removed_lines)
"""
it = iter(diff)
for line in it:
while not line.startswith('@@'):
try:
line = next(it)
except StopIteration:
return
parts = line.split(' ')
previous_start, previous_end = parse_chunk_header_file_range(
parts[1].lstrip('-'))
current_start, current_end = parse_chunk_header_file_range(
parts[2].lstrip('+'))
in_delta = False
added_lines_start = None
added_lines_end = None
removed_lines = []
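    # Walk the chunk body: consecutive '-'/'+' lines form a single delta, and a
    # context (' ') line or the end of the chunk closes it, yielding
    # (added_lines_start, added_lines_end, removed_lines).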
while previous_start < previous_end or current_start < current_end:
line = next(it)
firstchar = line[0]
line = line[1:]
if not in_delta and (firstchar == '-' or firstchar == '+'):
in_delta = True
added_lines_start = current_start
added_lines_end = current_start
removed_lines = []
if firstchar == '-':
removed_lines.append(line)
previous_start += 1
elif firstchar == '+':
current_start += 1
added_lines_end = current_start
elif firstchar == ' ':
if in_delta:
in_delta = False
yield (added_lines_start, added_lines_end, removed_lines)
previous_start += 1
current_start += 1
if in_delta:
yield (added_lines_start, added_lines_end, removed_lines)
def should_skip_commit(commit):
"""Decides if |commit| should be skipped when computing the blame.
Commit 5d4451e deleted all files in the repo except for DEPS. The
next commit, 1e7896, brought them back. This is a hack to skip
those commits (except for the files they modified). If we did not
do this, changes would be incorrectly attributed to 1e7896.
Args:
commit: A Commit object.
Returns:
A boolean indicating if this commit should be skipped.
"""
banned_commits = [
'1e78967ed2f1937b3809c19d91e7dd62d756d307',
'5d4451ebf298d9d71f716cc0135f465cec41fcd0',
]
if commit.hash not in banned_commits:
return False
banned_commits_file_exceptions = [
'DEPS',
'chrome/browser/ui/views/file_manager_dialog_browsertest.cc',
]
for line in commit.diff:
if line.startswith('---') or line.startswith('+++'):
if line.split(' ')[1] in banned_commits_file_exceptions:
return False
elif line.startswith('@@'):
return True
assert False
def generate_substrings(file):
"""Generates substrings from a file stream, where substrings are
separated by '\0'.
For example, the input:
'a\0bc\0\0\0d\0'
would produce the output:
['a', 'bc', 'd']
Args:
file: A readable file.
"""
BUF_SIZE = 448 # Experimentally found to be pretty fast.
data = []
while True:
buf = file.read(BUF_SIZE)
parts = buf.split(b'\0')
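    # parts[0] continues the substring carried over from the previous read,
    # middle parts are complete substrings, and parts[-1] may continue into the
    # next read.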
data.append(parts[0])
if len(parts) > 1:
joined = b''.join(data)
if joined != b'':
yield joined.decode()
for i in range(1, len(parts) - 1):
if parts[i] != b'':
yield parts[i].decode()
data = [parts[-1]]
if len(buf) < BUF_SIZE:
joined = b''.join(data)
if joined != b'':
yield joined.decode()
return
def generate_commits(git_log_stdout):
"""Parses git log output into a stream of Commit objects.
"""
substring_generator = generate_substrings(git_log_stdout)
try:
while True:
hash = next(substring_generator)
author_name = next(substring_generator)
author_email = next(substring_generator)
author_date = next(substring_generator)
message = next(substring_generator).rstrip('\n')
diff = next(substring_generator).split('\n')[1:-1]
yield Commit(hash, author_name, author_email, author_date, message, diff)
except StopIteration:
pass
def uberblame_aux(file_name, git_log_stdout, data, tokenization_method):
"""Computes the uberblame of file |file_name|.
Args:
file_name: File to uberblame.
git_log_stdout: A file object that represents the git log output.
data: A string containing the data of file |file_name|.
tokenization_method: A function that takes a string and returns a list of
TokenContexts.
Returns:
A tuple (data, blame).
data: File contents.
      blame: A list of lists of TokenContexts, one sublist per line of the
        file.
"""
blame = tokenization_method(data)
blamed_tokens = 0
uber_blame = (data, blame[:])
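  # Commits arrive newest-first.  For each commit, the lines it added are
  # replaced in |blame| by the lines it removed; tokens introduced by the
  # commit get attributed to it, while tokens that merely survived the commit
  # are carried backwards so an older commit can claim them.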
for commit in generate_commits(git_log_stdout):
if should_skip_commit(commit):
continue
offset = 0
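    # |offset| accounts for how much the chunks of this commit processed so far
    # have already grown or shrunk |blame|, shifting later line numbers.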
for (added_lines_start, added_lines_end,
removed_lines) in parse_chunks_from_diff(commit.diff):
added_lines_start += offset
added_lines_end += offset
previous_contexts = [
token_lines
for line_previous in removed_lines
for token_lines in tokenization_method(line_previous)
]
previous_tokens = [[context.token for context in contexts]
for contexts in previous_contexts]
current_contexts = blame[added_lines_start:added_lines_end]
current_tokens = [[context.token for context in contexts]
for contexts in current_contexts]
added_token_positions, changed_token_positions = (
compute_changed_token_positions(previous_tokens, current_tokens))
for r, c in added_token_positions:
current_contexts[r][c].commit = commit
blamed_tokens += 1
for r, c in changed_token_positions:
pr, pc = changed_token_positions[(r, c)]
previous_contexts[pr][pc] = current_contexts[r][c]
assert added_lines_start <= added_lines_end <= len(blame)
current_blame_size = len(blame)
blame[added_lines_start:added_lines_end] = previous_contexts
offset += len(blame) - current_blame_size
assert blame == [] or blame == [[]]
return uber_blame
def uberblame(file_name, revision, tokenization_method):
"""Computes the uberblame of file |file_name|.
Args:
file_name: File to uberblame.
revision: The revision to start the uberblame at.
tokenization_method: A function that takes a string and returns a list of
TokenContexts.
Returns:
A tuple (data, blame).
data: File contents.
      blame: A list of lists of TokenContexts, one sublist per line of the
        file.
"""
DIFF_CONTEXT = 3
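  # The NUL bytes emitted by -z and the %x00 separators in --format let
  # generate_substrings() split the log output into per-commit fields.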
cmd_git_log = [
'git', 'log', '--minimal', '--no-prefix', '--follow', '-m',
'--first-parent', '-p',
'-U%d' % DIFF_CONTEXT, '-z', '--format=%x00%H%x00%an%x00%ae%x00%ad%x00%B',
revision, '--', file_name
]
git_log = subprocess.Popen(
cmd_git_log, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
data = subprocess.check_output(
['git', 'show', '%s:%s' % (revision, file_name)]).decode()
data, blame = uberblame_aux(file_name, git_log.stdout, data,
tokenization_method)
stderr = git_log.communicate()[1].decode()
if git_log.returncode != 0:
raise subprocess.CalledProcessError(git_log.returncode, cmd_git_log, stderr)
return data, blame
def generate_pastel_color():
"""Generates a random color from a nice looking pastel palette.
Returns:
The color, formatted as hex string. For example, white is "#FFFFFF".
"""
(h, l, s) = (random.uniform(0, 1), random.uniform(0.8, 0.9), random.uniform(
0.5, 1))
(r, g, b) = colorsys.hls_to_rgb(h, l, s)
return "#%0.2X%0.2X%0.2X" % (int(r * 255), int(g * 255), int(b * 255))
def colorize_diff(diff):
"""Colorizes a diff for use in an HTML page.
Args:
diff: The diff, in unified diff format, as a list of line strings.
Returns:
The HTML-formatted diff, as a string. The diff will already be escaped.
"""
colorized = []
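  # The backslash-escaped quotes survive into create_visualization(), where
  # each colorized diff is embedded inside a double-quoted JavaScript string.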
for line in diff:
escaped = html.escape(line.replace('\r', ''), quote=True)
if line.startswith('+'):
colorized.append('<span class=\\"addition\\">%s</span>' % escaped)
elif line.startswith('-'):
colorized.append('<span class=\\"deletion\\">%s</span>' % escaped)
elif line.startswith('@@'):
context_begin = escaped.find('@@', 2)
assert context_begin != -1
colorized.append(
'<span class=\\"chunk_meta\\">%s</span>'
          '<span class=\\"chunk_context\\">%s</span>'
% (escaped[0:context_begin + 2], escaped[context_begin + 2:]))
elif line.startswith('diff') or line.startswith('index'):
colorized.append('<span class=\\"file_header\\">%s</span>' % escaped)
else:
colorized.append('<span class=\\"context_line\\">%s</span>' % escaped)
return '\n'.join(colorized)
def create_visualization(data, blame):
"""Creates a web page to visualize |blame|.
Args:
data: The data file as returned by uberblame().
    blame: A list of lists of TokenContexts, as returned by uberblame().
Returns:
The HTML for the generated page, as a string.
"""
# Use the same seed for the color generator on each run so that
# loading the same blame of the same file twice will result in the
# same generated HTML page.
random.seed(0x52937865ec62d1ea)
page = """\
<html>
<head>
<style>
body {
font-family: monospace;
}
pre {
display: inline;
}
.token {
outline: 1pt solid #00000030;
outline-offset: -1pt;
cursor: pointer;
}
.addition {
color: #080;
}
.deletion {
color: #c00;
}
.chunk_meta {
color: #099;
}
        .context_line .chunk_context {
          /* Just normal text. */
        }
.file_header {
font-weight: bold;
}
#linenums {
text-align: right;
}
#file_display {
position: absolute;
left: 0;
top: 0;
width: 50%%;
height: 100%%;
overflow: scroll;
}
#commit_display_container {
position: absolute;
left: 50%%;
top: 0;
width: 50%%;
height: 100%%;
overflow: scroll;
}
</style>
<script>
commit_data = %s;
function display_commit(hash) {
var e = document.getElementById("commit_display");
e.innerHTML = commit_data[hash]
}
</script>
</head>
<body>
<div id="file_display">
<table>
<tbody>
<tr>
<td valign="top" id="linenums">
<pre>%s</pre>
</td>
<td valign="top">
<pre>%s</pre>
</td>
</tr>
</tbody>
</table>
</div>
<div id="commit_display_container" valign="top">
        <pre id="commit_display"></pre>
</div>
</body>
</html>
"""
page = textwrap.dedent(page)
commits = {}
lines = []
commit_colors = {}
blame_index = 0
blame = [context for contexts in blame for context in contexts]
row = 0
lastline = ''
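  # Walk the file a character at a time, opening a <span> at the first token of
  # each run of consecutive tokens blamed on the same commit and closing it
  # after the last token of that run.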
for line in data.split('\n'):
lastline = line
column = 0
for c in line + '\n':
if blame_index < len(blame):
token_context = blame[blame_index]
if (row == token_context.row and
column == token_context.column + len(token_context.token)):
if (blame_index + 1 == len(blame) or blame[blame_index].commit.hash !=
blame[blame_index + 1].commit.hash):
lines.append('</span>')
blame_index += 1
if blame_index < len(blame):
token_context = blame[blame_index]
if row == token_context.row and column == token_context.column:
if (blame_index == 0 or blame[blame_index - 1].commit.hash !=
blame[blame_index].commit.hash):
hash = token_context.commit.hash
commits[hash] = token_context.commit
if hash not in commit_colors:
commit_colors[hash] = generate_pastel_color()
color = commit_colors[hash]
lines.append(('<span class="token" style="background-color: %s" ' +
'onclick="display_commit(&quot;%s&quot;)">') % (color,
hash))
lines.append(html.escape(c))
column += 1
row += 1
commit_data = ['{\n']
commit_display_format = """\
commit: {hash}
Author: {author_name} <{author_email}>
Date: {author_date}
{message}
"""
commit_display_format = textwrap.dedent(commit_display_format)
  links = re.compile(r'(https?:\/\/\S+)')
for hash in commits:
commit = commits[hash]
commit_display = commit_display_format.format(
hash=hash,
author_name=commit.author_name,
author_email=commit.author_email,
author_date=commit.author_date,
message=commit.message)
commit_display = html.escape(commit_display, quote=True)
commit_display += colorize_diff(commit.diff)
commit_display = re.sub(links, '<a href=\\"\\1\\">\\1</a>', commit_display)
commit_display = commit_display.replace('\n', '\\n')
commit_data.append('"%s": "%s",\n' % (hash, commit_display))
commit_data.append('}')
commit_data = ''.join(commit_data)
line_nums = range(1, row if lastline.strip() == '' else row + 1)
line_nums = '\n'.join([str(num) for num in line_nums])
lines = ''.join(lines)
return page % (commit_data, line_nums, lines)
def show_visualization(page):
  """Displays |page| in a web browser.
  Args:
    page: The HTML contents of the page to display, as a string.
  """
# Keep the temporary file around so the browser has time to open it.
# TODO(thomasanderson): spin up a temporary web server to serve this
# file so we don't have to leak it.
html_file = tempfile.NamedTemporaryFile(delete=False, suffix='.html')
html_file.write(page.encode())
html_file.flush()
if sys.platform.startswith('linux'):
# Don't show any messages when starting the browser.
saved_stdout = os.dup(1)
saved_stderr = os.dup(2)
os.close(1)
os.close(2)
os.open(os.devnull, os.O_RDWR)
os.open(os.devnull, os.O_RDWR)
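    # The two opens above reuse the lowest free descriptors (1 and 2), so the
    # browser's output goes to /dev/null until the originals are restored below.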
webbrowser.open('file://' + html_file.name)
if sys.platform.startswith('linux'):
os.dup2(saved_stdout, 1)
os.dup2(saved_stderr, 2)
os.close(saved_stdout)
os.close(saved_stderr)
def main(argv):
parser = argparse.ArgumentParser(
description='Show what revision last modified each token of a file.')
parser.add_argument(
'revision',
default='HEAD',
nargs='?',
help='show only commits starting from a revision')
parser.add_argument('file', help='the file to uberblame')
parser.add_argument(
'--skip-visualization',
action='store_true',
help='do not display the blame visualization in a web browser')
parser.add_argument(
'--tokenize-by-char',
action='store_true',
help='treat individual characters as tokens')
parser.add_argument(
'--tokenize-whitespace',
action='store_true',
help='also blame non-newline whitespace characters')
args = parser.parse_args(argv)
def tokenization_method(data):
return tokenize_data(data, args.tokenize_by_char, args.tokenize_whitespace)
data, blame = uberblame(args.file, args.revision, tokenization_method)
html = create_visualization(data, blame)
if not args.skip_visualization:
show_visualization(html)
return 0
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]))