I want to return a dense tensor of the non-zero indices for each row. For example, given the tensor:
    [0, 1, 1]
    [1, 0, 0]
    [0, 0, 1]
    [0, 1, 0]

it should return:

    [1, 2]
    [0]
    [2]
    [1]
I can get the indices using tf.where(), but I do not know how to combine the results based on the first index. For example:
    import tensorflow as tf

    graph = tf.Graph()
    with graph.as_default():
        data = tf.constant([[0, 1, 1], [1, 0, 0], [0, 0, 1], [0, 1, 0]])
        indices = tf.where(tf.not_equal(data, 0))

    sess = tf.InteractiveSession(graph=graph)
    sess.run(tf.local_variables_initializer())
    print(sess.run([indices]))
The above code returns:
    [array([[0, 1],
            [0, 2],
            [1, 0],
            [2, 2],
            [3, 1]])]
However, I need to combine these results based on the first column of the indices. Can you suggest a way to do this?
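For clarity, here is a rough NumPy sketch (outside the graph) of the grouping I am after, simply bucketing the column indices from the tf.where() output above by their row index; it is meant only to illustrate the goal, not as a solution:

    import numpy as np

    indices_value = np.array([[0, 1], [0, 2], [1, 0], [2, 2], [3, 1]])
    # group the column index (second column) by the row index (first column)
    grouped = [indices_value[indices_value[:, 0] == row, 1].tolist() for row in range(4)]
    print(grouped)  # [[1, 2], [0], [2], [1]]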
Update
I am trying to make this work for a larger number of dimensions and am running into problems. If I run the code below on a larger matrix
    sess = tf.InteractiveSession()
    a = tf.constant([[0, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                     [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
                     [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
                     [0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
                     [0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
                     [0, 0, 0, 0, 0, 0, 0, 1, 0, 1],
                     [0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
                     [0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
                     [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]])
    row_counts = tf.reduce_sum(a, axis=1)
    max_padding = tf.reduce_max(row_counts)
    extra_padding = max_padding - row_counts
    extra_padding_col = tf.expand_dims(extra_padding, 1)
    range_row = tf.expand_dims(tf.range(max_padding), 0)
    padding_array = tf.cast(tf.tile(range_row, [9, 1]) < extra_padding_col, tf.int32)
    b = tf.concat([a, padding_array], axis=1)
    result = tf.map_fn(lambda x: tf.cast(tf.where(tf.not_equal(x, 0)), tf.int32), b)
    result = tf.where(result <= max_padding, result, -1 * tf.ones_like(result))  # replace with -1's
    result = tf.reshape(result, (int(result.get_shape()[0]), max_padding))
    result.eval()
then I get many -1's, so the solution does not seem quite right:
    [[ 1,  2],
     [ 2, -1],
     [-1, -1],
     [-1, -1],
     [-1, -1],
     [-1, -1],
     [-1, -1],
     [-1, -1],
     [ 0, -1]]
Notice that in your example the output is not a matrix but a jagged array. Jagged arrays have limited support in TensorFlow (through TensorArray), so it is more convenient to deal with rectangular arrays. You can pad each row with -1's to make the output rectangular.
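For example, the jagged output from the original 4x3 example becomes the following rectangular array after padding with -1's. This is a minimal NumPy sketch of the target layout only, not the TensorFlow solution itself:

    import numpy as np

    # jagged per-row indices for [[0,1,1],[1,0,0],[0,0,1],[0,1,0]]
    jagged = [[1, 2], [0], [2], [1]]
    max_len = max(len(row) for row in jagged)
    # pad every row with -1's up to the length of the longest row
    padded = np.array([row + [-1] * (max_len - len(row)) for row in jagged])
    print(padded)
    # [[ 1  2]
    #  [ 0 -1]
    #  [ 2 -1]
    #  [ 1 -1]]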
Supposing the output is rectangular and no padding is needed, you can use tf.map_fn as follows:
    tf.reset_default_graph()
    sess = tf.InteractiveSession()
    a = tf.constant([[0, 1, 1], [1, 1, 0], [1, 0, 1], [1, 1, 0]])
    # the cast is needed because map_fn likes to keep the same dtype, but tf.where returns int64
    result = tf.map_fn(lambda x: tf.cast(tf.where(tf.not_equal(x, 0)), tf.int32), a)
    # remove one level of nesting
    sess.run(tf.reshape(result, (4, 2)))
The output is:
    array([[1, 2],
           [0, 1],
           [0, 2],
           [0, 1]], dtype=int32)
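As an aside, the reshape above only drops the trailing size-1 dimension that tf.where() adds per row; assuming result has shape (4, 2, 1) here, tf.squeeze should give the same output without hard-coding the shape (a sketch, not a required part of the solution):

    # alternative to the reshape: drop the trailing dimension of size 1
    sess.run(tf.squeeze(result, axis=2))
    # array([[1, 2],
    #        [0, 1],
    #        [0, 2],
    #        [0, 1]], dtype=int32)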
When padding is needed, you can do something like this:
    sess = tf.InteractiveSession()
    a = tf.constant([[0, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                     [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
                     [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
                     [0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
                     [0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
                     [0, 0, 0, 0, 0, 0, 0, 1, 0, 1],
                     [0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
                     [0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
                     [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]])
    row_counts = tf.reduce_sum(a, axis=1)
    max_padding = tf.reduce_max(row_counts)
    max_index = int(a.get_shape()[1])
    extra_padding = max_padding - row_counts
    extra_padding_col = tf.expand_dims(extra_padding, 1)
    range_row = tf.expand_dims(tf.range(max_padding), 0)
    num_rows = tf.squeeze(tf.shape(a)[0])
    padding_array = tf.cast(tf.tile(range_row, [num_rows, 1]) < extra_padding_col, tf.int32)
    b = tf.concat([a, padding_array], axis=1)
    result = tf.map_fn(lambda x: tf.cast(tf.where(tf.not_equal(x, 0)), tf.int32), b)
    result = tf.where(result < max_index, result, -1 * tf.ones_like(result))  # replace padding indices with -1's
    result = tf.reshape(result, (int(result.get_shape()[0]), max_padding))
    result.eval()
This should produce:
    array([[ 1,  2],
           [ 2, -1],
           [ 4, -1],
           [ 5,  6],
           [ 6, -1],
           [ 7,  9],
           [ 8, -1],
           [ 9, -1],
           [ 0,  9]], dtype=int32)
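To see why the padding trick works: every row of b ends up with exactly max_padding non-zero entries (the original ones plus the ones placed in the appended columns), and any index that lands in the appended columns (i.e. >= max_index) is replaced by -1 afterwards. Here is a small NumPy sketch of the intermediate values for this matrix, purely to illustrate the idea:

    import numpy as np

    a_np = np.array([[0, 1, 1, 0, 0, 0, 0, 0, 0, 0],
                     [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
                     [0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
                     [0, 0, 0, 0, 0, 1, 1, 0, 0, 0],
                     [0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
                     [0, 0, 0, 0, 0, 0, 0, 1, 0, 1],
                     [0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
                     [0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
                     [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]])
    row_counts = a_np.sum(axis=1)             # [2 1 1 2 1 2 1 1 2]
    max_padding = row_counts.max()            # 2
    extra_padding = max_padding - row_counts  # [0 1 1 0 1 0 1 1 0]
    # each padding row has extra_padding[i] ones, so every row of the
    # concatenated matrix b has exactly max_padding non-zero entries
    padding_array = (np.arange(max_padding)[None, :] < extra_padding[:, None]).astype(int)
    print(padding_array[:3])
    # [[0 0]
    #  [1 0]
    #  [1 0]]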